diff --git a/.gitbook/assets b/.gitbook/assets
deleted file mode 120000
index e4c5bd02..00000000
--- a/.gitbook/assets
+++ /dev/null
@@ -1 +0,0 @@
-../images/
\ No newline at end of file
diff --git a/images/gb-cover-final.png b/.gitbook/assets/gb-cover-final.png
similarity index 100%
rename from images/gb-cover-final.png
rename to .gitbook/assets/gb-cover-final.png
diff --git a/.gitbook/assets/gb-cover.png b/.gitbook/assets/gb-cover.png
new file mode 100644
index 00000000..9318e81e
Binary files /dev/null and b/.gitbook/assets/gb-cover.png differ
diff --git a/.gitbook/assets/image (1) (1).png b/.gitbook/assets/image (1) (1).png
new file mode 100644
index 00000000..991c84a9
Binary files /dev/null and b/.gitbook/assets/image (1) (1).png differ
diff --git a/.gitbook/assets/image (1).png b/.gitbook/assets/image (1).png
new file mode 100644
index 00000000..991c84a9
Binary files /dev/null and b/.gitbook/assets/image (1).png differ
diff --git a/.gitbook/assets/image.png b/.gitbook/assets/image.png
new file mode 100644
index 00000000..5447a925
Binary files /dev/null and b/.gitbook/assets/image.png differ
diff --git a/LICENSE b/LICENSE
deleted file mode 100644
index 261eeb9e..00000000
--- a/LICENSE
+++ /dev/null
@@ -1,201 +0,0 @@
- Apache License
- Version 2.0, January 2004
- http://www.apache.org/licenses/
-
- TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
- 1. Definitions.
-
- "License" shall mean the terms and conditions for use, reproduction,
- and distribution as defined by Sections 1 through 9 of this document.
-
- "Licensor" shall mean the copyright owner or entity authorized by
- the copyright owner that is granting the License.
-
- "Legal Entity" shall mean the union of the acting entity and all
- other entities that control, are controlled by, or are under common
- control with that entity. For the purposes of this definition,
- "control" means (i) the power, direct or indirect, to cause the
- direction or management of such entity, whether by contract or
- otherwise, or (ii) ownership of fifty percent (50%) or more of the
- outstanding shares, or (iii) beneficial ownership of such entity.
-
- "You" (or "Your") shall mean an individual or Legal Entity
- exercising permissions granted by this License.
-
- "Source" form shall mean the preferred form for making modifications,
- including but not limited to software source code, documentation
- source, and configuration files.
-
- "Object" form shall mean any form resulting from mechanical
- transformation or translation of a Source form, including but
- not limited to compiled object code, generated documentation,
- and conversions to other media types.
-
- "Work" shall mean the work of authorship, whether in Source or
- Object form, made available under the License, as indicated by a
- copyright notice that is included in or attached to the work
- (an example is provided in the Appendix below).
-
- "Derivative Works" shall mean any work, whether in Source or Object
- form, that is based on (or derived from) the Work and for which the
- editorial revisions, annotations, elaborations, or other modifications
- represent, as a whole, an original work of authorship. For the purposes
- of this License, Derivative Works shall not include works that remain
- separable from, or merely link (or bind by name) to the interfaces of,
- the Work and Derivative Works thereof.
-
- "Contribution" shall mean any work of authorship, including
- the original version of the Work and any modifications or additions
- to that Work or Derivative Works thereof, that is intentionally
- submitted to Licensor for inclusion in the Work by the copyright owner
- or by an individual or Legal Entity authorized to submit on behalf of
- the copyright owner. For the purposes of this definition, "submitted"
- means any form of electronic, verbal, or written communication sent
- to the Licensor or its representatives, including but not limited to
- communication on electronic mailing lists, source code control systems,
- and issue tracking systems that are managed by, or on behalf of, the
- Licensor for the purpose of discussing and improving the Work, but
- excluding communication that is conspicuously marked or otherwise
- designated in writing by the copyright owner as "Not a Contribution."
-
- "Contributor" shall mean Licensor and any individual or Legal Entity
- on behalf of whom a Contribution has been received by Licensor and
- subsequently incorporated within the Work.
-
- 2. Grant of Copyright License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- copyright license to reproduce, prepare Derivative Works of,
- publicly display, publicly perform, sublicense, and distribute the
- Work and such Derivative Works in Source or Object form.
-
- 3. Grant of Patent License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- (except as stated in this section) patent license to make, have made,
- use, offer to sell, sell, import, and otherwise transfer the Work,
- where such license applies only to those patent claims licensable
- by such Contributor that are necessarily infringed by their
- Contribution(s) alone or by combination of their Contribution(s)
- with the Work to which such Contribution(s) was submitted. If You
- institute patent litigation against any entity (including a
- cross-claim or counterclaim in a lawsuit) alleging that the Work
- or a Contribution incorporated within the Work constitutes direct
- or contributory patent infringement, then any patent licenses
- granted to You under this License for that Work shall terminate
- as of the date such litigation is filed.
-
- 4. Redistribution. You may reproduce and distribute copies of the
- Work or Derivative Works thereof in any medium, with or without
- modifications, and in Source or Object form, provided that You
- meet the following conditions:
-
- (a) You must give any other recipients of the Work or
- Derivative Works a copy of this License; and
-
- (b) You must cause any modified files to carry prominent notices
- stating that You changed the files; and
-
- (c) You must retain, in the Source form of any Derivative Works
- that You distribute, all copyright, patent, trademark, and
- attribution notices from the Source form of the Work,
- excluding those notices that do not pertain to any part of
- the Derivative Works; and
-
- (d) If the Work includes a "NOTICE" text file as part of its
- distribution, then any Derivative Works that You distribute must
- include a readable copy of the attribution notices contained
- within such NOTICE file, excluding those notices that do not
- pertain to any part of the Derivative Works, in at least one
- of the following places: within a NOTICE text file distributed
- as part of the Derivative Works; within the Source form or
- documentation, if provided along with the Derivative Works; or,
- within a display generated by the Derivative Works, if and
- wherever such third-party notices normally appear. The contents
- of the NOTICE file are for informational purposes only and
- do not modify the License. You may add Your own attribution
- notices within Derivative Works that You distribute, alongside
- or as an addendum to the NOTICE text from the Work, provided
- that such additional attribution notices cannot be construed
- as modifying the License.
-
- You may add Your own copyright statement to Your modifications and
- may provide additional or different license terms and conditions
- for use, reproduction, or distribution of Your modifications, or
- for any such Derivative Works as a whole, provided Your use,
- reproduction, and distribution of the Work otherwise complies with
- the conditions stated in this License.
-
- 5. Submission of Contributions. Unless You explicitly state otherwise,
- any Contribution intentionally submitted for inclusion in the Work
- by You to the Licensor shall be under the terms and conditions of
- this License, without any additional terms or conditions.
- Notwithstanding the above, nothing herein shall supersede or modify
- the terms of any separate license agreement you may have executed
- with Licensor regarding such Contributions.
-
- 6. Trademarks. This License does not grant permission to use the trade
- names, trademarks, service marks, or product names of the Licensor,
- except as required for reasonable and customary use in describing the
- origin of the Work and reproducing the content of the NOTICE file.
-
- 7. Disclaimer of Warranty. Unless required by applicable law or
- agreed to in writing, Licensor provides the Work (and each
- Contributor provides its Contributions) on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- implied, including, without limitation, any warranties or conditions
- of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
- PARTICULAR PURPOSE. You are solely responsible for determining the
- appropriateness of using or redistributing the Work and assume any
- risks associated with Your exercise of permissions under this License.
-
- 8. Limitation of Liability. In no event and under no legal theory,
- whether in tort (including negligence), contract, or otherwise,
- unless required by applicable law (such as deliberate and grossly
- negligent acts) or agreed to in writing, shall any Contributor be
- liable to You for damages, including any direct, indirect, special,
- incidental, or consequential damages of any character arising as a
- result of this License or out of the use or inability to use the
- Work (including but not limited to damages for loss of goodwill,
- work stoppage, computer failure or malfunction, or any and all
- other commercial damages or losses), even if such Contributor
- has been advised of the possibility of such damages.
-
- 9. Accepting Warranty or Additional Liability. While redistributing
- the Work or Derivative Works thereof, You may choose to offer,
- and charge a fee for, acceptance of support, warranty, indemnity,
- or other liability obligations and/or rights consistent with this
- License. However, in accepting such obligations, You may act only
- on Your own behalf and on Your sole responsibility, not on behalf
- of any other Contributor, and only if You agree to indemnify,
- defend, and hold each Contributor harmless for any liability
- incurred by, or claims asserted against, such Contributor by reason
- of your accepting any such warranty or additional liability.
-
- END OF TERMS AND CONDITIONS
-
- APPENDIX: How to apply the Apache License to your work.
-
- To apply the Apache License to your work, attach the following
- boilerplate notice, with the fields enclosed by brackets "[]"
- replaced with your own identifying information. (Don't include
- the brackets!) The text should be enclosed in the appropriate
- comment syntax for the file format. We also recommend that a
- file or class name and description of purpose be included on the
- same "printed page" as the copyright notice for easier
- identification within third-party archives.
-
- Copyright [yyyy] [name of copyright owner]
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
diff --git a/README.md b/README.md
index 984c1ca2..3e652c2e 100644
--- a/README.md
+++ b/README.md
@@ -1,42 +1,33 @@
---
-description: Get started with the Cisco Crosswork NSO documentation guides.
-icon: power-off
-cover: images/gb-cover-final.png
-coverY: -33.22891656662665
+description: Supplementary documentation and resources for your NSO deployment.
+icon: paper-plane
+cover: .gitbook/assets/gb-cover-final.png
+coverY: -32.46361044417767
+layout:
+ width: default
+ cover:
+ visible: true
+ size: hero
+ title:
+ visible: true
+ description:
+ visible: true
+ tableOfContents:
+ visible: true
+ outline:
+ visible: true
+ pagination:
+ visible: true
+ metadata:
+ visible: true
---
-# Start
+# Overview
-Use this page to navigate your way through the NSO documentation and access the resources most relevant to your role.
+## NSO Resources
-## NSO Roles
+
+Platform Tools
+Add-on packages and tools for your NSO deployment.
-
-{% hint style="info" %}
-A more comprehensive list of learning resources and associated material is available on the [Learning Paths](https://nso-docs.cisco.com/learn-nso/learning-paths) page.
-{% endhint %}
-
-## Work with NSO
-
-For users working in a production-wide NSO deployment.
-
-### Administration
-
-
diff --git a/SUMMARY.md b/SUMMARY.md
index 736ea8f5..98229f0d 100644
--- a/SUMMARY.md
+++ b/SUMMARY.md
@@ -1,200 +1,31 @@
# Table of contents
-* [Start](README.md)
-* [What's New](whats-new.md)
-
-## Administration
-
-* [Get Started](administration/get-started.md)
-* [Installation and Deployment](administration/installation-and-deployment/README.md)
- * [Local Install](administration/installation-and-deployment/local-install.md)
- * [System Install](administration/installation-and-deployment/system-install.md)
- * [Post-Install Actions](administration/installation-and-deployment/post-install-actions/README.md)
- * [Explore the Installation](administration/installation-and-deployment/post-install-actions/explore-the-installation.md)
- * [Start and Stop NSO](administration/installation-and-deployment/post-install-actions/start-stop-nso.md)
- * [Create NSO Instance](administration/installation-and-deployment/post-install-actions/create-nso-instance.md)
- * [Enable Development Mode](administration/installation-and-deployment/post-install-actions/enable-development-mode.md)
- * [Running NSO Examples](administration/installation-and-deployment/post-install-actions/running-nso-examples.md)
- * [Migrate to System Install](administration/installation-and-deployment/post-install-actions/migrate-to-system-install.md)
- * [Modify Examples for System Install](administration/installation-and-deployment/post-install-actions/modify-examples-for-system-install.md)
- * [Uninstall Local Install](administration/installation-and-deployment/post-install-actions/uninstall-local-install.md)
- * [Uninstall System Install](administration/installation-and-deployment/post-install-actions/uninstall-system-install.md)
- * [Containerized NSO](administration/installation-and-deployment/containerized-nso.md)
- * [Development to Production Deployment](administration/installation-and-deployment/development-to-production-deployment/README.md)
- * [Develop and Deploy a Nano Service](administration/installation-and-deployment/deployment/develop-and-deploy-a-nano-service.md)
- * [Secure Deployment](administration/installation-and-deployment/deployment/secure-deployment.md)
- * [Deployment Example](administration/installation-and-deployment/deployment/deployment-example.md)
- * [Upgrade NSO](administration/installation-and-deployment/upgrade-nso.md)
-* [Management](administration/management/README.md)
- * [System Management](administration/management/system-management/README.md)
- * [Cisco Smart Licensing](administration/management/system-management/cisco-smart-licensing.md)
- * [Log Messages and Formats](administration/management/system-management/log-messages-and-formats.md)
- * [Alarm Types](administration/management/system-management/alarms.md)
- * [Package Management](administration/management/package-mgmt.md)
- * [High Availability](administration/management/high-availability.md)
- * [AAA Infrastructure](administration/management/aaa-infrastructure.md)
- * [NED Administration](administration/management/ned-administration.md)
-* [Advanced Topics](administration/advanced-topics/README.md)
- * [Locks](administration/advanced-topics/locks.md)
- * [CDB Persistence](administration/advanced-topics/cdb-persistence.md)
- * [IPC Connection](administration/advanced-topics/ipc-connection.md)
- * [Cryptographic Keys](administration/advanced-topics/cryptographic-keys.md)
- * [Service Manager Restart](administration/advanced-topics/restart-strategies-for-service-manager.md)
- * [IPv6 on Northbound Interfaces](administration/advanced-topics/ipv6-on-northbound-interfaces.md)
- * [Layered Service Architecture](administration/advanced-topics/layered-service-architecture.md)
-
-## Operation & Usage
-
-* [Get Started](operation-and-usage/get-started.md)
-* [CLI](operation-and-usage/cli/README.md)
- * [Introduction to NSO CLI](operation-and-usage/cli/introduction-to-nso-cli.md)
- * [CLI Commands](operation-and-usage/cli/cli-commands.md)
-* [Web UI](operation-and-usage/webui/README.md)
- * [Home](operation-and-usage/webui/home.md)
- * [Devices](operation-and-usage/webui/devices.md)
- * [Services](operation-and-usage/webui/services.md)
- * [Config Editor](operation-and-usage/webui/config-editor.md)
- * [Tools](operation-and-usage/webui/tools.md)
-* [Operations](operation-and-usage/operations/README.md)
- * [Basic Operations](operation-and-usage/operations/basic-operations.md)
- * [NEDs and Adding Devices](operation-and-usage/operations/neds-and-adding-devices.md)
- * [Manage Network Services](operation-and-usage/operations/managing-network-services.md)
- * [Device Manager](operation-and-usage/operations/nso-device-manager.md)
- * [Out-of-band Interoperation](operation-and-usage/operations/out-of-band-interoperation.md)
- * [SSH Key Management](operation-and-usage/operations/ssh-key-management.md)
- * [Alarm Manager](operation-and-usage/operations/alarm-manager.md)
- * [Plug-and-Play Scripting](operation-and-usage/operations/plug-and-play-scripting.md)
- * [Compliance Reporting](operation-and-usage/operations/compliance-reporting.md)
- * [Listing Packages](operation-and-usage/operations/listing-packages.md)
- * [Lifecycle Operations](operation-and-usage/operations/lifecycle-operations.md)
- * [Network Simulator](operation-and-usage/operations/network-simulator-netsim.md)
-
-## Development
-
-* [Get Started](development/get-started.md)
-* [Introduction to Automation](development/introduction-to-automation/README.md)
- * [CDB and YANG](development/introduction-to-automation/cdb-and-yang.md)
- * [Basic Automation with Python](development/introduction-to-automation/basic-automation-with-python.md)
- * [Develop a Simple Service](development/introduction-to-automation/develop-a-simple-service.md)
- * [Applications in NSO](development/introduction-to-automation/applications-in-nso.md)
-* [Core Concepts](development/core-concepts/README.md)
- * [Services](development/core-concepts/services.md)
- * [Implementing Services](development/core-concepts/implementing-services.md)
- * [Templates](development/core-concepts/templates.md)
- * [Nano Services](development/core-concepts/nano-services.md)
- * [Packages](development/core-concepts/packages.md)
- * [Using CDB](development/core-concepts/using-cdb.md)
- * [YANG](development/core-concepts/yang.md)
- * [NSO Concurrency Model](development/core-concepts/nso-concurrency-model.md)
- * [Service Handling of Ambiguous Device Models](development/core-concepts/service-handling-of-ambiguous-device-models.md)
- * [NSO Virtual Machines](development/core-concepts/nso-virtual-machines/README.md)
- * [NSO Python VM](development/core-concepts/nso-virtual-machines/nso-python-vm.md)
- * [NSO Java VM](development/core-concepts/nso-virtual-machines/nso-java-vm.md)
- * [Embedded Erlang Applications](development/core-concepts/nso-virtual-machines/embedded-erlang-applications.md)
- * [API Overview](development/core-concepts/api-overview/README.md)
- * [Python API Overview](development/core-concepts/api-overview/python-api-overview.md)
- * [Java API Overview](development/core-concepts/api-overview/java-api-overview.md)
- * [Northbound APIs](development/core-concepts/northbound-apis/README.md)
- * [NSO NETCONF Server](development/core-concepts/northbound-apis/nso-netconf-server.md)
- * [RESTCONF API](development/core-concepts/northbound-apis/restconf-api.md)
- * [NSO SNMP Agent](development/core-concepts/northbound-apis/nso-snmp-agent.md)
-* [Advanced Development](development/advanced-development/README.md)
- * [Development Environment and Resources](development/advanced-development/development-environment-and-resources.md)
- * [Developing Services](development/advanced-development/developing-services/README.md)
- * [Services Deep Dive](development/advanced-development/developing-services/services-deep-dive.md)
- * [Service Development Using Java](development/advanced-development/developing-services/service-development-using-java.md)
- * [NSO Developer Studio](https://nso-docs.cisco.com/resources/platform-tools/nso-developer-studio)
- * [Developing Packages](development/advanced-development/developing-packages.md)
- * [Developing NEDs](development/advanced-development/developing-neds/README.md)
- * [NETCONF NED Development](development/advanced-development/developing-neds/netconf-ned-development.md)
- * [CLI NED Development](development/advanced-development/developing-neds/cli-ned-development.md)
- * [Generic NED Development](development/advanced-development/developing-neds/generic-ned-development.md)
- * [SNMP NED](development/advanced-development/developing-neds/snmp-ned.md)
- * [NED Upgrades and Migration](development/advanced-development/developing-neds/ned-upgrades-and-migration.md)
- * [Developing Alarm Applications](development/advanced-development/developing-alarm-applications.md)
- * [Kicker](development/advanced-development/kicker.md)
- * [Scaling and Performance Optimization](development/advanced-development/scaling-and-performance-optimization.md)
- * [Progress Trace](development/advanced-development/progress-trace.md)
- * [Web UI Development](development/advanced-development/web-ui-development/README.md)
- * [JSON-RPC API](development/advanced-development/web-ui-development/json-rpc-api.md)
-* [Connected Topics](development/connected-topics/README.md)
- * [SNMP Notification Receiver](development/connected-topics/snmp-notification-receiver.md)
- * [Web Server](development/connected-topics/web-server.md)
- * [Scheduler](development/connected-topics/scheduler.md)
- * [External Logging](development/connected-topics/external-logging.md)
- * [Encrypted Strings](development/connected-topics/encryption-keys.md)
-
-## Resources
-
-* [Manual Pages](resources/man/README.md)
- * [clispec](resources/man/clispec.5.md)
- * [confd\_lib](resources/man/confd_lib.3.md)
- * [confd\_lib\_cdb](resources/man/confd_lib_cdb.3.md)
- * [confd\_lib\_dp](resources/man/confd_lib_dp.3.md)
- * [confd\_lib\_events](resources/man/confd_lib_events.3.md)
- * [confd\_lib\_ha](resources/man/confd_lib_ha.3.md)
- * [confd\_lib\_lib](resources/man/confd_lib_lib.3.md)
- * [confd\_lib\_maapi](resources/man/confd_lib_maapi.3.md)
- * [confd\_types](resources/man/confd_types.3.md)
- * [mib\_annotations](resources/man/mib_annotations.5.md)
- * [ncs](resources/man/ncs.1.md)
- * [ncs-backup](resources/man/ncs-backup.1.md)
- * [ncs-collect-tech-report](resources/man/ncs-collect-tech-report.1.md)
- * [ncs-installer](resources/man/ncs-installer.1.md)
- * [ncs-maapi](resources/man/ncs-maapi.1.md)
- * [ncs-make-package](resources/man/ncs-make-package.1.md)
- * [ncs-netsim](resources/man/ncs-netsim.1.md)
- * [ncs-project](resources/man/ncs-project.1.md)
- * [ncs-project-create](resources/man/ncs-project-create.1.md)
- * [ncs-project-export](resources/man/ncs-project-export.1.md)
- * [ncs-project-git](resources/man/ncs-project-git.1.md)
- * [ncs-project-setup](resources/man/ncs-project-setup.1.md)
- * [ncs-project-update](resources/man/ncs-project-update.1.md)
- * [ncs-setup](resources/man/ncs-setup.1.md)
- * [ncs-uninstall](resources/man/ncs-uninstall.1.md)
- * [ncs.conf](resources/man/ncs.conf.5.md)
- * [ncs\_cli](resources/man/ncs_cli.1.md)
- * [ncs\_cmd](resources/man/ncs_cmd.1.md)
- * [ncs\_load](resources/man/ncs_load.1.md)
- * [ncsc](resources/man/ncsc.1.md)
- * [tailf\_yang\_cli\_extensions](resources/man/tailf_yang_cli_extensions.5.md)
- * [tailf\_yang\_extensions](resources/man/tailf_yang_extensions.5.md)
-
-## Developer Reference
-
-* [Python API Reference](developer-reference/pyapi/README.md)
- * [ncs Module](developer-reference/pyapi/ncs.md)
- * [ncs.alarm Module](developer-reference/pyapi/ncs.alarm.md)
- * [ncs.application Module](developer-reference/pyapi/ncs.application.md)
- * [ncs.cdb Module](developer-reference/pyapi/ncs.cdb.md)
- * [ncs.dp Module](developer-reference/pyapi/ncs.dp.md)
- * [ncs.experimental Module](developer-reference/pyapi/ncs.experimental.md)
- * [ncs.log Module](developer-reference/pyapi/ncs.log.md)
- * [ncs.maagic Module](developer-reference/pyapi/ncs.maagic.md)
- * [ncs.maapi Module](developer-reference/pyapi/ncs.maapi.md)
- * [ncs.progress Module](developer-reference/pyapi/ncs.progress.md)
- * [ncs.service\_log Module](developer-reference/pyapi/ncs.service_log.md)
- * [ncs.template Module](developer-reference/pyapi/ncs.template.md)
- * [ncs.util Module](developer-reference/pyapi/ncs.util.md)
- * [\_ncs Module](developer-reference/pyapi/_ncs.md)
- * [\_ncs.cdb Module](developer-reference/pyapi/_ncs.cdb.md)
- * [\_ncs.dp Module](developer-reference/pyapi/_ncs.dp.md)
- * [\_ncs.error Module](developer-reference/pyapi/_ncs.error.md)
- * [\_ncs.events Module](developer-reference/pyapi/_ncs.events.md)
- * [\_ncs.ha Module](developer-reference/pyapi/_ncs.ha.md)
- * [\_ncs.maapi Module](developer-reference/pyapi/_ncs.maapi.md)
-* [Java API Reference](developer-reference/java-api-reference.md)
-* [Erlang API Reference](developer-reference/erlang/README.md)
- * [econfd Module](developer-reference/erlang/econfd.md)
- * [econfd_cdb Module](developer-reference/erlang/econfd_cdb.md)
- * [econfd_ha Module](developer-reference/erlang/econfd_ha.md)
- * [econfd_logsyms Module](developer-reference/erlang/econfd_logsyms.md)
- * [econfd_maapi Module](developer-reference/erlang/econfd_maapi.md)
- * [econfd_notif Module](developer-reference/erlang/econfd_notif.md)
- * [econfd_schema Module](developer-reference/erlang/econfd_schema.md)
-* [RESTCONF API](developer-reference/restconf-api/README.md)
- * [Sample RESTCONF API Docs](https://developer.cisco.com/docs/nso/overview/)
-* [NETCONF Interface](developer-reference/netconf-interface.md)
-* [JSON-RPC API](developer-reference/json-rpc-api.md)
-* [SNMP Agent](developer-reference/snmp-agent.md)
-* [XPath](developer-reference/xpath.md)
+* [Overview](README.md)
+
+## Platform Tools
+
+* [Observability Exporter](platform-tools/observability-exporter.md)
+* [Phased Provisioning](platform-tools/phased-provisioning.md)
+* [Resource Manager (4.2.12)](platform-tools/resource-manager/README.md)
+ * [Resource Manager API Guide (4.2.12)](platform-tools/resource-manager/resource-manager-api-guide.md)
+* [NSO Developer Studio](platform-tools/nso-developer-studio.md)
+
+## Best Practices
+
+* [NSO on Kubernetes](best-practices/nso-on-kubernetes.md)
+* [Network Automation Delivery Model](best-practices/network-automation-delivery-model.md)
+* [Scaling and Performance Optimization](best-practices/scaling-and-performance-optimization.md)
+
+## NSO Resources
+
+* [NSO on GitHub](nso-resources/nso-on-github.md)
+* [Postman Collections](nso-resources/postman-collections.md)
+* [Developer Support](nso-resources/developer-support.md)
+* [NSO Changelog Explorer](nso-resources/nso-changelog-explorer.md)
+* [NED Changelog Explorer](nso-resources/ned-changelog-explorer.md)
+* [NED Capabilities Explorer](nso-resources/ned-capabilities-explorer.md)
+* [Communities](nso-resources/communities/README.md)
+ * [Blogs](https://community.cisco.com/t5/nso-developer-hub-blogs/bg-p/5672j-blogs-dev-nso)
+ * [Community Forum](https://community.cisco.com/t5/nso-developer-hub/ct-p/5672j-dev-nso)
+ * [DevDays Hub](https://video.cisco.com/category/videos/nso-developer-days-event-hub)
+* [Support & Downloads](nso-resources/support-and-downloads.md)
diff --git a/administration/advanced-topics/README.md b/administration/advanced-topics/README.md
deleted file mode 100644
index 85db95fe..00000000
--- a/administration/advanced-topics/README.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-description: Deep-dive into advanced NSO concepts.
-icon: layer-plus
----
-
-# Advanced Topics
-
diff --git a/administration/advanced-topics/cdb-persistence.md b/administration/advanced-topics/cdb-persistence.md
deleted file mode 100644
index 52e7a906..00000000
--- a/administration/advanced-topics/cdb-persistence.md
+++ /dev/null
@@ -1,69 +0,0 @@
----
-description: Select the optimal CDB persistence mode for your use case.
----
-
-# CDB Persistence
-
-The Configuration Database (CDB) is a built-in datastore for NSO, specifically designed for network automation use cases and backed by the YANG schema. Since NSO 6.4, the CDB can be configured to operate in one of the two distinct modes: `in-memory-v1` and `on-demand-v1`.
-
-The `in-memory-v1` mode keeps all the configuration data in RAM for the fastest access time. New data is persisted to disk in the form of journal (WAL) files, which the system uses on every restart to reconstruct the RAM database. But the amount of RAM needed is proportional to the number of managed devices and services. When NSO is used to manage a large network, the amount of needed RAM can be quite large. This is the only CDB persistence mode available before NSO 6.4.
-
-The `on-demand-v1` mode loads data on demand from the disk into the RAM and supports offloading the least-used data to free up memory. Loading only the compiled YANG schema initially (in the form of .fxs files) results in faster system startup times. This mode was first introduced in NSO 6.4.
-
-{% hint style="warning" %}
-For reliable storage of the configuration on disk, regardless of the persistence mode, the CDB requires that the file system correctly implements the standard primitives for file synchronization and truncation. For this reason (as well as for performance), NFS or other network file systems are unsuitable for use with the CDB - they may be acceptable for development, but using them in production is unsupported and strongly discouraged.
-{% endhint %}
-
-Compared to `in-memory-v1`, `on-demand-v1` mode has a number of benefits:
-
-* **Faster startup time**: Data is not loaded into memory at startup; only the schema is.
-* **Lower memory requirements**: Data is loaded into memory only when needed and offloaded when not.
-* **Faster sync of high-availability nodes**: Only subscribed data on the followers is loaded at once.
-* **Background compaction**: The compaction process no longer locks the CDB, allowing writes to proceed uninterrupted.
-
-While the `on-demand-v1` mode is as fast for reads of "hot" data (already in memory) as the `in-memory-v1` mode, reads are slower for "cold" data (not loaded in memory), since the data first has to be read from disk. In turn, this results in a bigger variance in the time that a read takes in the `on-demand-v1` mode, based on whether the data is already available in RAM or not. This variance can manifest in different ways, for example, as a longer time to produce the service mapping or to create a rollback for the first request. To lessen the effect, we highly recommend fast storage, such as NVMe flash drives.
-
-Furthermore, the two modes differ in the way they internally organize and store data, resulting in different performance characteristics. If sufficient RAM is available, in some cases, `in-memory-v1` performs better, while in others, `on-demand-v1` performs better. One known case where the performance of `on-demand-v1` does not reach that of `in-memory-v1` is deleting large trees of data. But in general, only extensive testing of the specific use case can tell which mode performs better.
-
-As a rule of thumb, we recommend the `on-demand-v1` mode, as it has typical performance comparable to `in-memory-v1` but has better maintainability properties. However, if performance requirements and testing favor the `in-memory-v1` mode, that may be a viable choice. Discounting the migration time, you can easily switch between the two modes with automatic migration at system startup.
-
-## Configuring Persistence Mode
-
-CDB persistence is configured under `/ncs-config/cdb/persistence` in the `ncs.conf` file. The `format` leaf selects the desired persistence mode, either `on-demand-v1` or `in-memory-v1` (default `in-memory-v1`), and the system automatically migrates the data on the next start if needed. Note that the system will not be available for the duration of the migration.
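-
-For illustration, selecting the on-demand mode could look as follows (a minimal `ncs.conf` sketch based on the paths above; consult the [ncs.conf(5) man page](../../resources/man/ncs.conf.5.md) for the authoritative schema):
-
-```xml
-<cdb>
-  <persistence>
-    <!-- on-demand-v1 or in-memory-v1 (the default) -->
-    <format>on-demand-v1</format>
-  </persistence>
-</cdb>
-```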
-
-With the `on-demand-v1` mode, the additional offloading configuration under the `offload` container becomes relevant (`in-memory-v1` keeps all data in RAM and does not perform any offloading). The `offload/interval` setting specifies how often the system checks its memory consumption and starts the offload process if required.
-
-During the offloading process, data is evicted from memory:
-
-1. If the piece of data was last accessed more than `offload/threshold/max-age` ago (the default value of infinity disables this check).
-2. The least-recently-used items are evicted until their usage drops below the allowed amount.
-
-The allowed amount is defined either by the absolute value `offload/threshold/megabytes` or by `offload/threshold/system-memory-percentage`, where the value is calculated dynamically based on the available system RAM. We recommend using the latter unless testing has shown specific requirements.
-
-The actual value should be adjusted according to the use case and system requirements; there is no single optimal setting for all cases. We recommend you start with defaults and then adjust according to observations. You can enable the new `/ncs-config/cdb/persistence/db-statistics` property to aid you in this task (producing `LOG` files inside the CDB directory), as well as the counters and gauges that are available under `/ncs:metric/sysadmin/*/cdb`.
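-
-A sketch of the offload settings described above (element names follow the configuration paths in the text; the values are illustrative placeholders, not recommendations):
-
-```xml
-<cdb>
-  <persistence>
-    <format>on-demand-v1</format>
-    <offload>
-      <!-- how often to check memory consumption -->
-      <interval>...</interval>
-      <threshold>
-        <!-- allowed amount as a share of available system RAM -->
-        <system-memory-percentage>50</system-memory-percentage>
-        <!-- evict data not accessed for this long (default: infinity) -->
-        <max-age>...</max-age>
-      </threshold>
-    </offload>
-  </persistence>
-</cdb>
-```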
-
-## Compaction
-
-For durability, improved performance, and snapshot isolation, CDB writes in NSO use data structures, such as a write-ahead log (WAL), that require periodic compaction.
-
-For example, the `in-memory-v1` persistence mode appends a new log entry for each CDB transaction to the target datastore WAL file (`A.cdb` for the configuration, `O.cdb` for the operational, and `S.cdb` for the snapshot datastore). Depending on the size and number of transactions towards the system, these files will grow in size, leading to increased disk utilization, longer boot times, and longer initial data synchronization time when setting up a high-availability cluster using this persistence mode.
-
-Compaction is a mechanism used to reduce the size of the write-ahead logs to a minimum. In `on-demand-v1` mode, it is automatic, non-configurable, and runs in the background without affecting the ongoing transactions.
-
-But in `in-memory-v1` mode, it works by replacing an existing write-ahead log, which is composed of a number of consecutive transaction logs created in run-time, with a single transaction log representing the full current state of the datastore. From this perspective, a compaction acts similarly to a write transaction towards a datastore. To ensure data integrity, 'write' transactions towards the datastore are not permitted during the time compaction takes place. For this reason, NSO exposes a number of settings to control the compaction process in `in-memory-v1` mode (these have no effect for `on-demand-v1`).
-
-### Compacting In-Memory CDB
-
-By default, compaction is handled automatically by the CDB. After each transaction, CDB evaluates whether compaction is required for the affected datastore.
-
-This is done by examining the number of added nodes as well as the file size changes since the last performed compaction. The thresholds used can be modified in the `ncs.conf` file by configuring the `/ncs-config/compaction/file-size-relative`, `/ncs-config/compaction/file-size-absolute`, and `/ncs-config/compaction/num-node-relative` settings.
-
-It is also possible to automatically trigger compaction after a set number of transactions by setting the `/ncs-config/compaction/num-transaction` property.
-
-In the configuration datastore, compaction is by default delayed by 5 seconds when the threshold is reached, to prevent any upcoming write transaction from being blocked. If the system is idle during these 5 seconds, meaning that there is no new transaction, compaction starts. Otherwise, it is delayed by another 5 seconds. The delay time can be configured in `ncs.conf` by setting the `/ncs-config/compaction/delayed-compaction-timeout` property.
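-
-Mapped onto `ncs.conf`, the compaction settings described above live under `/ncs-config/compaction` (a sketch; the `...` values are placeholders to be tuned per deployment):
-
-```xml
-<compaction>
-  <!-- thresholds for automatic compaction after a transaction -->
-  <file-size-relative>...</file-size-relative>
-  <file-size-absolute>...</file-size-absolute>
-  <num-node-relative>...</num-node-relative>
-  <!-- trigger compaction after a set number of transactions -->
-  <num-transaction>...</num-transaction>
-  <!-- delay before compacting the configuration datastore -->
-  <delayed-compaction-timeout>...</delayed-compaction-timeout>
-</compaction>
-```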
-
-As compaction may require a significant amount of time, it may be preferable to disable automatic compaction by CDB and instead trigger compaction manually according to specific needs. If doing so, it is highly recommended to have another automated system in place. Automation of compaction can be done by using a scheduling mechanism such as CRON or by using the NCS scheduler. See [Scheduler](../../development/connected-topics/scheduler.md) for more information.
-
-By default, CDB may perform compaction during its boot process. This may be disabled, if required, by starting NSO with the flag `--disable-compaction-on-start`.
-
-Additionally, CDB CAPI provides a set of functions that may be used to create an external mechanism for compaction. See `cdb_initiate_journal_compaction()`, `cdb_initiate_journal_dbfile_compaction()`, and `cdb_get_compaction_info()` in [confd\_lib\_cdb(3)](../../resources/man/confd_lib_cdb.3.md) in Manual Pages.
diff --git a/administration/advanced-topics/cryptographic-keys.md b/administration/advanced-topics/cryptographic-keys.md
deleted file mode 100644
index 09f3211a..00000000
--- a/administration/advanced-topics/cryptographic-keys.md
+++ /dev/null
@@ -1,183 +0,0 @@
----
-description: >-
- Store strings in NSO that are encrypted and decrypted using cryptographic
- keys.
----
-
-# Cryptographic Keys
-
-By using the NSO built-in encrypted YANG extension types `tailf:aes-cfb-128-encrypted-string` or `tailf:aes-256-cfb-128-encrypted-string`, it is possible to store encrypted string values in NSO. See the [tailf\_yang\_extensions(5)](../../resources/man/tailf_yang_extensions.5.md#yang-types-2) man page for more details on the encrypted string YANG extension types.
-
-## Providing Keys
-
-NSO supports defining one or more sets of cryptographic keys directly in `ncs.conf` or using an external command. Three methods can be used to configure the keys in `ncs.conf`:
-
-* External command providing keys under `/ncs-config/encrypted-strings/external-keys`.
-* Key rotation under `/ncs-config/encrypted-strings/key-rotation`.
-* Legacy (single generation) format: `/ncs-config/encrypted-strings/AESCFB128` and `/ncs-config/encrypted-strings/AES256CFB128`.
-
-### NSO Installer-Provided Cryptography Keys
-
-* **Local installation**: Dummy keys are provided in legacy format in `ncs.conf` for development purposes. For deployment, the keys must be changed to random values. Example local installation `ncs.conf` (do not reuse):
-
- ```xml
- <encrypted-strings>
-   <AESCFB128>
-     <key>0123456789abcdef0123456789abcdeg</key>
-   </AESCFB128>
-   <AES256CFB128>
-     <key>0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdeg</key>
-   </AES256CFB128>
- </encrypted-strings>
- ```
-* **System installation**: Random keys are generated in the legacy format and stored in `${NCS_CONFIG_DIR}/ncs.crypto_keys`, and read using the `${NCS_DIR}/bin/ncs_crypto_keys` external command as configured in `${NCS_CONFIG_DIR}/ncs.conf`. Example system installation `ncs.conf`:
-
- ```xml
- <encrypted-strings>
-   <external-keys>
-     <command>${NCS_DIR}/bin/ncs_crypto_keys</command>
-     <command-argument>${NCS_CONFIG_DIR}/ncs.crypto_keys</command-argument>
-   </external-keys>
- </encrypted-strings>
- ```
-
- Example system installation `ncs.crypto_keys` file (do not reuse):
-
- ```
- AESCFB128_KEY=40f7c3b5222c1458be3411cdc0899fg
- AES256CFB128_KEY=5a08b6d78b1ce768c67e13e76f88d8af7f3d925ce5bfedf7e3169de6270bb6eg
- ```
-
- For details on using a custom external command to read the encryption keys, see [Encrypted Strings](../../development/connected-topics/encryption-keys.md).
-
-You can generate a new set of keys, e.g. for use within the `ncs.crypto_keys` file, with the following command (requires `openssl` to be present):
-
-```sh
-#!/bin/sh
-cat <<EOF
-AESCFB128_KEY=$(openssl rand -hex 16)
-AES256CFB128_KEY=$(openssl rand -hex 32)
-EOF
-```
-
-Keys provided using the `/ncs-config/encrypted-strings/key-rotation` method carry a `generation`, here `0` and `1`, directly in `ncs.conf`. Example (do not reuse):
-
-```xml
-<encrypted-strings>
-  <key-rotation>
-    <generation>0</generation>
-    <AESCFB128>
-      <key>0123456789abcdef0123456789abcdeg</key>
-    </AESCFB128>
-    <AES256CFB128>
-      <key>3c687d564e250ad987198d179537af563341357493ed2242ef3b16a881dd608g</key>
-    </AES256CFB128>
-  </key-rotation>
-  <key-rotation>
-    <generation>1</generation>
-    <AESCFB128>
-      <key>0123456789abcdef0123456789abcdeh</key>
-    </AESCFB128>
-    <AES256CFB128>
-      <key>3c687d564e250ad987198d179537af563341357493ed2242ef3b16a881dd608h</key>
-    </AES256CFB128>
-  </key-rotation>
-</encrypted-strings>
-```
-
-External keys that can be rotated must be provided with the initial line `EXTERNAL_KEY_FORMAT=2` and the `generation` within square brackets. Example (do not reuse):
-
-```
-EXTERNAL_KEY_FORMAT=2
-AESCFB128_KEY[0]=0123456789abcdef0123456789abcdeg
-AES256CFB128_KEY[0]=3c687d564e250ad987198d179537af563341357493ed2242ef3b16a881dd608g
-AESCFB128_KEY[1]=0123456789abcdef0123456789abcdeh
-AES256CFB128_KEY[1]=3c687d564e250ad987198d179537af563341357493ed2242ef3b16a881dd608h
-```
-
-There is always an active generation:
-
-* Active generation is the generation in the set of keys currently used to encrypt and decrypt all leafs with an encrypted string type.
-* The active generation is persisted.
-* If using the legacy method of providing keys in `ncs.conf`, or when providing keys using the `external-keys` method without the initial line `EXTERNAL_KEY_FORMAT=2`, the active generation will be `-1`.
-* If starting NSO without any previous keys using the `/ncs-config/encrypted-strings/key-rotation` method or the `external-keys` method with the initial line `EXTERNAL_KEY_FORMAT=2`, the highest provided generation will be selected as the active generation.
-
-For `ncs.conf` details, see the [ncs.conf(5) man page](../../resources/man/ncs.conf.5.md) under `/ncs-config/encrypted-strings`.
-
-## Key Rotation
-
-Rotating cryptographic keys means replacing an old cryptographic key with a new one while maintaining the functionality of the encryption and decryption of encrypted string values in NSO. It is a standard practice in cryptography and key management to enhance security and mitigate risks associated with key exposure or compromise.\
-Key rotation helps ensure that sensitive data remains secure over time. It reduces the impact of potential key compromise and adheres to best practices for cryptographic hygiene. Key benefits:
-
-* If a cryptographic key is compromised, rotating it reduces the amount of data exposed to the attacker since previously encrypted values can be re-encrypted with a new key.
-* Regular rotation minimizes the time a single key is in use, thereby reducing the potential damage an attacker could do if they gain access to it.
-* Reusing the same key for a prolonged period increases the risk of data correlation attacks (e.g., frequency analysis). Rotation ensures unique keys are used for encrypting strings, reducing this risk.
-* Regularly rotating keys helps organizations maintain and test their key management processes. This ensures the system is prepared to handle key management tasks effectively in an emergency.
-
-To rotate to a new generation of keys and re-encrypt the data:
-
-1. Always [take a backup](../management/system-management/#backup-and-restore) using [ncs-backup](../../resources/man/ncs-backup.1.md).
-2. Check the currently active generation using the `/key-rotation/get-active-generation` action.
-3. Re-encrypt all encrypted values with a new set of keys using the `/key-rotation/apply-new-keys` action with the `new-key-generation` to rotate to as input.\
- The commit queue must be empty before running the action, or the action will fail, as the snapshot database is re-initialized. To wait for the commit queue to become empty, use the `wait-commit-queue` argument with the number of seconds to wait before failing.
-
-CLI example:
-
-```
-$ ${NCS_DIR}/bin/ncs-backup
-$ ncs_cli -Cu admin
-# key-rotation get-active-generation
-active-generation -1
-# key-rotation apply-new-keys new-key-generation 0 wait-commit-queue 10
-result true
-new-active-key-generation 0
-```
-
-The following data in CDB is subject to re-encryption when executing the `/key-rotation/apply-new-keys` action:
-
-* Encrypted types.
-* Unions of encrypted types.
-* Service metadata (original attribute, reverse and forward diff set).
-* NED secrets.
-* Rollback files.
-* History log.
-
-Under the hood, the `/key-rotation/apply-new-keys` action, when executed, performs the following steps:
-
-1. Starts an upgrade transaction that will be used when re-encrypting the datastore.
-2. Loads the new active cryptographic keys into CDB and persists them.
-3. Syncs HA.
-4. Re-encrypts the data.
-5. Drops the CDB snapshot database.
-6. Commits the data.
-7. Restarts the NSO VMs.
-8. Ends the upgrade.
-
-## Reloading After Changes to the Cryptographic Keys
-
-1. Before changing the cryptographic keys, always [take a backup](../management/system-management/#backup-and-restore) using [ncs-backup](../../resources/man/ncs-backup.1.md). Also, back up the external key file, default `${NCS_CONFIG_DIR}/ncs.crypto_keys`, or the `${NCS_CONFIG_DIR}/ncs.conf` file, depending on where the keys are stored.
-2. Suppose you have previously provided keys in the legacy format and wish to switch to `/ncs-config/encrypted-strings/key-rotation` or `external-keys` with the initial line `EXTERNAL_KEY_FORMAT=2`. In that case, you must provide the currently used keys as generation `-1`. The new keys can have any non-negative generation number.
-3. Replace the external key file or `ncs.conf` file depending on where the keys are stored.
-4. Issue `ncs --reload` to reload the cryptographic keys.
-5. Ensure commit queues are empty or wait for them to become empty.
-6. Execute the `/key-rotation/apply-new-keys` action to change the active generation, for example, from `-1` to `new-key-generation 0`, as shown in the CLI example above.
-
-{% hint style="info" %}
-In a high-availability setting, keys must be identical on all nodes before attempting key rotation. Otherwise, the action will abort. The node executing the action will initiate the key reload for all nodes.
-{% endhint %}
-
-## Migrating 3DES Encrypted Values
-
-NSO 6.5 removed support for 3DES encryption since the algorithm is no longer deemed sufficiently secure. If you are migrating from an older version and you have data using the `tailf:des3-cbc-encrypted-string` YANG type, NSO will no longer be able to read this data. In fact, compiling a YANG module using this type will produce an error.
-
-To avoid losing data when upgrading to NSO 6.5 or later, you must first update all the YANG data models and change the `tailf:des3-cbc-encrypted-string` type to either `tailf:aes-cfb-128-encrypted-string` or `tailf:aes-256-cfb-128-encrypted-string`. Compile the updated models and then perform a package upgrade for the affected packages.
-
-While upgrading the packages, the automatic CDB schema upgrade will re-encrypt the data in the new (AES) format. At this point you are ready to upgrade to the new NSO version that no longer supports 3DES.
diff --git a/administration/advanced-topics/ipc-connection.md b/administration/advanced-topics/ipc-connection.md
deleted file mode 100644
index 69eec231..00000000
--- a/administration/advanced-topics/ipc-connection.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-description: Connect client libraries to NSO with IPC.
----
-
-# IPC Connection
-
-Client libraries connect to NSO for inter-process communication (IPC) using TCP or Unix domain sockets.
-
-If NSO is configured to use TCP sockets for IPC, you can tell NSO which address to use for these connections through the `/ncs-config/ncs-ipc-address/ip` (default value 127.0.0.1) and `/ncs-config/ncs-ipc-address/port` (default value 4569) elements in `ncs.conf`. If you change these values, you will likely need to configure the clients accordingly. Note that these values have security implications; see [Security Issues](../installation-and-deployment/deployment/secure-deployment.md#securing-ipc-access). In particular, changing the address away from 127.0.0.1 may allow unauthenticated remote connections.
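-
-For reference, the defaults correspond to the following `ncs.conf` fragment (a sketch derived from the paths above):
-
-```xml
-<ncs-ipc-address>
-  <ip>127.0.0.1</ip>
-  <port>4569</port>
-</ncs-ipc-address>
-```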
-
-Many of the clients read the environment variables `NCS_IPC_ADDR` and `NCS_IPC_PORT` to determine if something other than the default is to be used, but others might need source code changes. This is a list of clients that communicate with NSO and what needs to be done when `ncs-ipc-address` is changed.
-
-
-| Client | Changes required |
-| --- | --- |
-| Remote commands via the `ncs` command | Remote commands, such as `ncs --reload`, check the environment variables `NCS_IPC_ADDR` and `NCS_IPC_PORT`. |
-| CLI tools | The Command Line Interface (CLI) client `ncs_cli` and similar commands, such as `ncs_cmd` and `ncs_load`, check the environment variables `NCS_IPC_ADDR` and `NCS_IPC_PORT`. Alternatively, many of them also support command-line options. |
-| CDB and MAAPI clients | The address supplied to `Cdb.connect()` and `Maapi.connect()` must be changed. |
-| Data provider API clients | The address supplied to the `Dp` constructor socket must be changed. |
-| Notification API clients | The new address must be supplied to the socket for the `Notif` constructor. |
-
-Likewise, if NSO is configured to use Unix domain sockets for IPC and you have changed the path under `/ncs-config/ncs-local-ipc/path` in `ncs.conf`, you can tell clients to use the new path through the `NCS_IPC_PATH` environment variable. Clients must also have filesystem permission to access the IPC path, or they will not be able to communicate with the NSO daemon process.
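-
-A sketch of the corresponding Unix domain socket configuration (assuming an `enabled` switch next to the `path` leaf described above; the socket path is a hypothetical example):
-
-```xml
-<ncs-local-ipc>
-  <enabled>true</enabled>
-  <!-- hypothetical path; point clients at it via NCS_IPC_PATH -->
-  <path>/var/run/nso/ipc</path>
-</ncs-local-ipc>
-```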
-
-To run more than one instance of NSO on the same host (which can be useful in development scenarios), each instance needs its own IPC socket. If using TCP for IPC, set `/ncs-config/ncs-ipc-address/port` in `ncs.conf` to different values for each instance. If, instead, you are using Unix sockets for IPC, set `/ncs-config/ncs-local-ipc/path` in `ncs.conf` to different values. In either case, you may also need to change the NETCONF and CLI over SSH ports under `/ncs-config/netconf/transport` and `/ncs-config/cli/ssh` by either disabling them or changing their values.
-
-## Restricting Access to the IPC Socket
-
-By default, clients connecting to the IPC socket are considered trusted, i.e., there is no authentication required, as the system relies on the use of 127.0.0.1 for `/ncs-config/ncs-ipc-address/ip` or Unix domain sockets to prevent remote access. In case this is not sufficient, such as when untrusted users have shell access on the system where NSO runs, it is possible to further restrict the access to the IPC socket.
-
-If Unix domain sockets are used, you can leverage Unix filesystem permissions for the socket path to limit which OS users and groups can initiate connections to the socket. NSO may also perform additional authentication of the connecting users; see [Authenticating IPC Access](../management/aaa-infrastructure.md#authenticating-ipc-access).
-
-For TCP sockets, you can enable an access check by setting the `ncs.conf` element `/ncs-config/ncs-ipc-access-check/enabled` to `true`, and specifying a filename for `/ncs-config/ncs-ipc-access-check/filename`. The file should contain a shared secret, i.e., a random (printable ASCII) character string. Clients connecting to the IPC socket will then be required to prove that they have knowledge of the secret through a challenge handshake before they are allowed access to the NSO functions provided via the IPC socket.
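-
-Enabling the check could look like this (a sketch following the `/ncs-config/ncs-ipc-access-check` paths above; the secret file location is a hypothetical example):
-
-```xml
-<ncs-ipc-access-check>
-  <enabled>true</enabled>
-  <!-- file holding the shared secret; restrict its read permissions -->
-  <filename>${NCS_CONFIG_DIR}/ipc_access_secret</filename>
-</ncs-ipc-access-check>
-```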
-
-{% hint style="info" %}
-The access permissions on this file must be restricted via OS file permissions, such that it can only be read by the NSO daemon and client processes that are allowed to connect to the IPC port. E.g. if both the daemon and the clients run as root, the file can be owned by root and have only "read by owner" permission (i.e. mode 0400). Another possibility is to have a group that only the daemon and the clients belong to, set the group ID of the file to that group, and have only "read by group" permission (i.e. mode 040).
-{% endhint %}
-
-To provide the secret to the client libraries and inform them that they need to use the access check handshake, you have to set the environment variable `NCS_IPC_ACCESS_FILE` to the full pathname of the file containing the secret. This is sufficient for all the clients mentioned above, i.e., there is no need to change the application code to support or enable this check.
-
-{% hint style="info" %}
-The access check must be either enabled or disabled for both the daemon and the clients. E.g., if `/ncs-config/ncs-ipc-access-check/enabled` in `ncs.conf` is not set to `true` but clients are started with the environment variable `NCS_IPC_ACCESS_FILE` pointing to a file with a secret, the client connections will fail.
-{% endhint %}
diff --git a/administration/advanced-topics/ipv6-on-northbound-interfaces.md b/administration/advanced-topics/ipv6-on-northbound-interfaces.md
deleted file mode 100644
index c589bb0a..00000000
--- a/administration/advanced-topics/ipv6-on-northbound-interfaces.md
+++ /dev/null
@@ -1,38 +0,0 @@
----
-description: Learn about using IPv6 on NSO's northbound interfaces.
----
-
-# IPv6 on Northbound Interfaces
-
-NSO supports access to all northbound interfaces via IPv6. In the simplest case, i.e., IPv6-only access, this is just a matter of configuring an IPv6 address (typically the wildcard address `::`) instead of IPv4 for the respective agents and transports in `ncs.conf`, e.g., `/ncs-config/cli/ssh/ip` for SSH connections to the CLI or `/ncs-config/netconf-north-bound/transport/ssh/ip` for SSH to the NETCONF agent. The SNMP agent is configured via one of the other northbound interfaces rather than via `ncs.conf`; see [NSO SNMP Agent](../../development/core-concepts/northbound-apis/#the-nso-snmp-agent) in Northbound APIs. For example, via the CLI, we would set `snmp agent ip` to the desired address. All these addresses default to the IPv4 wildcard address `0.0.0.0`.
-
-In most IPv6 deployments, it will, however, be necessary to support IPv6 and IPv4 access simultaneously. This requires that both IPv4 and IPv6 addresses are configured, typically `0.0.0.0` plus `::`. To support this, each agent and transport has, in addition to the `ip` and `port` leafs, an `extra-listen` list where additional IP address and port pairs can be configured. Thus, to configure the CLI to accept SSH connections to port 2024 on any local IPv6 address, in addition to the default (port 2024 on any local IPv4 address), we can add an `extra-listen` section under `/ncs-config/cli/ssh` in `ncs.conf`:
-
-```xml
-<cli>
-  <enabled>true</enabled>
-
-  <ssh>
-    <enabled>true</enabled>
-    <ip>0.0.0.0</ip>
-    <port>2024</port>
-    <extra-listen>
-      <ip>::</ip>
-      <port>2024</port>
-    </extra-listen>
-  </ssh>
-
-  ...
-</cli>
-```
-
-To configure the SNMP agent to accept requests to port 161 on any local IPv6 address, we could similarly use the CLI and give the command:
-
-```bash
-admin@ncs(config)# snmp agent extra-listen :: 161
-```
-
-The `extra-listen` list can take any number of address/port pairs; thus, this method can also be used when we want to accept connections/requests on several specified (IPv4 and/or IPv6) addresses instead of the wildcard address or when we want to use multiple ports.
diff --git a/administration/advanced-topics/layered-service-architecture.md b/administration/advanced-topics/layered-service-architecture.md
deleted file mode 100644
index 1bc78328..00000000
--- a/administration/advanced-topics/layered-service-architecture.md
+++ /dev/null
@@ -1,1217 +0,0 @@
----
-description: Design large and scalable NSO applications using LSA.
----
-
-# Layered Service Architecture
-
-Layered Service Architecture (LSA) is a design approach for massively large and scalable NSO applications. Large service providers and enterprises can use it to manage services for millions of users, ranging over several hundred thousand managed devices. Such scale requires special consideration since a single NSO instance no longer suffices and LSA helps you address this challenge.
-
-## Going Big
-
-At some point, scaling up hits the law of diminishing returns. Effectively, adding more resources to the NSO server becomes prohibitively expensive. To further increase the throughput of the whole system, you can share the load across multiple instances, in a scale-out fashion.
-
-You achieve this by splitting a service into a main, upper-layer part, and one or more lower-layer parts. The upper part controls and dispatches work to the lower parts. This is the same approach as using a customer-facing service (CFS) and a resource-facing service (RFS). However, here the CFS code (the upper-layer part) runs in a different NSO node than the RFS code (the lower-layer parts). What is more, the lower-layer parts can be spread across multiple NSO nodes.
-
-Each RFS node is responsible for its own set of managed devices, mounted under its `/devices` tree, and the upper-layer, CFS node only concerns itself with the RFS nodes. So, the CFS node only mounts the RFS nodes under its `/devices` tree, not managed devices directly. The main advantage of this architecture is that you can add many device RFS nodes that collectively manage a huge number of actual devices—much more than a single node could.
-
-_Figure: Layered CFS/RFS architecture_
-
-## Is LSA for Me?
-
-While it is tempting to design the system in the most scalable way from the start, it comes with a cost. Compared to a single, non-LSA setup, the automation system now becomes distributed across multiple nodes, with all the complexity that entails. For example, in a non-distributed system, the communication between different parts has mostly negligible latency and hardly ever fails. That is certainly not true anymore for distributed systems as we know them today, including LSA.
-
-More practically, taking a service in NSO and deploying a single instance on an LSA system is likely to take longer and have a higher chance of failure compared to a non-LSA system, because additional network communication is involved.
-
-Moreover, multiple NSO nodes present a higher operational complexity and administrative burden. There is no longer a “single pane of glass” view of all the individual devices. That's why you must weigh the benefits of the LSA approach against the scale at which you operate. When LSA starts making sense will depend on the type of devices you manage, the services you have, the geographical distribution of resources, and so on.
-
-A distributed system can push the overall throughput way beyond what a single instance can do. But you will achieve a much better outcome by first focusing on eliminating the bottlenecks in the provisioning code, as discussed in [Scaling and Performance Optimization](../../development/advanced-development/scaling-and-performance-optimization.md). Only when that proves insufficient, consider deploying LSA.
-
-LSA also addresses the memory limitations of NSO when device configurations become very large (individually or all together). If the NSO server is memory-constrained and more memory cannot be added, the LSA approach can be a solution.
-
-Another challenge that LSA may help you overcome is scaling organizationally. When many teams share the same NSO instance, it can get hard to separate the different concerns and responsibilities. Teams may also have different cadences or preferences for upgrades, resulting in friction. With LSA, it becomes possible to create a clearer separation. The CFS node and the RFS nodes can have different release cycles (as long as the YANG upgrade rules are followed) and each can be upgraded independently. If a bug is found or a feature is missing in the RFS nodes, it can be fixed without affecting the CFS node, and vice versa.
-
-To summarize, the major advantage of this architecture is scalability. The solution scales horizontally, both at the upper and the lower layer, thus catering for truly massive deployments, but at the expense of increased complexity.
-
-## Layered Service Design
-
-To take advantage of the scalability potential of LSA, your services must be designed in a layered fashion. Once the automation logic in NSO reaches a certain level of complexity, a stacked service design tends to emerge naturally. Often, you can extend it to LSA with relatively little change. The same is true for brand-new, green field designs.
-
-In other situations, you might need to invest some additional effort to split and orchestrate the work across multiple groups of devices. Examples are existing monolithic services or stacked service designs that require all RFSs to access all devices.
-
-### New, Greenfield Design
-
-If you are designing the service from scratch, you have the most freedom in choosing the partitioning of logic between CFS and RFS. The CFS must contain the YANG definition for the service and its configurable options that are available to the customer, perhaps through an order capture system north of the NSO. On the other hand, the RFS YANG models are internal to the service, that is, they are not used directly by the customer. So, you are free to design them in a way that makes the provisioning code as simple as possible.
-
-As an example, you might have a VLAN provisioning service where the CFS lets users select if the hosts on the VLAN can access the internet. Then you can divide provisioning into, let's say, an RFS service that configures the VLAN and the appropriate IP subnet across the data center switches, and another RFS service that configures the firewall to allow the traffic from the subnet to reach the internet. This design clearly separates the provisioned devices into two groups: firewalls and data center switches. Each group can be managed by a separate lower-layer NSO.
-
-### Existing Monolithic Application with Stacked Services
-
-Similar to a brand new design, an existing monolithic application that uses stacked services has already laid the groundwork for LSA-compatible design because of the existing division into two layers (upper and lower).
-
-A possible complication, in this case, is when each existing RFS touches all of the affected devices, and that makes it hard to partition devices across multiple lower-layer NSO nodes. For example, if one RFS manages the VLAN interface (the VLAN ID and layer 2 settings) and another RFS manages the IP configuration for this interface, that configuration very likely happens on the same devices. The solution in this situation could be to partition RFS services based on the data center that they operate in, such as one lower-layer NSO node for one data center, another lower-layer NSO for another data center, and so on. If that is not possible, an alternative is to redesign each RFS and split their responsibilities differently.
-
-#### Existing Monolithic Application
-
-The most complex, yet common case is when a single node NSO installation grows over time and you are faced with performance problems due to the new size. To leverage the LSA functionality, you must first split the service into upper- and lower-layer parts, which require a certain amount of effort. That is why the decision to use LSA should always be accompanied by a thorough analysis to determine what makes the system too slow. Sometimes, it is a result of a bad "must" expression in the service YANG code or similar. Fixing that is much easier than re-architecting the application.
-
-### Orchestrating the Work
-
-Regardless of whether you start with a green field design or extend an existing application, you must tackle the problem of dispatching the RFS instantiation to the correct lower-layer NSO node.
-
-Imagine a VPN application that uses a managed device on each site to securely connect to the private network. In a service provider network, this is usually done by the CPE. When a customer orders connectivity to an additional site (another leg of the VPN), the service needs to configure the site-local device (the CPE). As there will be potentially many such devices, each will be managed by one of the RFS nodes. However, the VPN service is managed centrally, through the CFS, which must:
-
-* Figure out which RFS node is responsible for the device for the new site (CPE).
-* Dispatch the RFS instantiation to that particular RFS node, making sure the device is properly configured.
-
-NSO provides a mechanism to facilitate the second part, the actual dispatch, but the service logic must somehow select the correct RFS node. If the RFS nodes are geographically separated across different countries or different data centers, the CFS could simply infer or calculate the right RFS node based on service instance parameters, such as the physical location of the new site.
-
-A more flexible alternative is to use dynamic mapping. It can be as simple as a list of 2-tuples that map a device name to an RFS node, stored in the CDB. The trade-off is that the list must be maintained. It is straightforward to automate the maintenance of the list though, for example through NETCONF notifications whenever `/devices/device` on the RFS nodes is manipulated or by explicitly asking the CFS node to query the RFS nodes for their list of devices.
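-
-A sketch of such a mapping model, assuming the `dispatch-map` naming used by the examples later in this section, could look like:
-
-```yang
-// Maps a managed device (router) name to the lower-layer
-// NSO (RFS) node responsible for it.
-list dispatch-map {
-  key router;
-  leaf router {
-    type string;
-  }
-  leaf rfs-node {
-    type leafref {
-      path "/ncs:devices/ncs:device/ncs:name";
-    }
-  }
-}
-```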
-
-Ultimately, the right approach to dispatch will depend on the complexity of your service and operational procedures.
-
-### Provisioning of an LSA Service Request
-
-Having designed a layered service with the CFS and RFS parts, the CFS must now communicate with the RFS that resides on a different node. You achieve that by adding the lower-layer (RFS) node as a managed device to the upper-layer (CFS) node. The CFS node must access the RFS data model on the lower-layer node, just like it accesses any other configuration on any managed device. But don't you need a NED to do this? Indeed, you do. That's why the RFS model needs to be specially compiled for the upper-layer node to use as part of a NED, not as a standalone service. A model compiled this way is called "device compiled".
-
-Let's then see how the LSA setup affects the whole service provisioning process. Suppose a new request arrives at the CFS node, such as a new service instance being created through RESTCONF by a customer order portal. The CFS runs the service mapping logic as usual; however, instead of configuring the network devices directly, the CFS configures the appropriate RFS nodes with the generated RFS service instance data. This is the dispatch logic in action.
-
-_Figure: LSA Request Flow_
-
-As the configuration for the lower-layer nodes happens under the `/devices/device` tree, it is picked up and pushed to the relevant NSO instances by the NED. The NED sends the appropriate NETCONF edit-config RPCs, which trigger the RFS FASTMAP code at the RFS nodes. The RFS mapping logic constructs the necessary network configuration for each RFS instance and the RFS nodes update the actual network devices.
-
-In case the commit queue feature is not being used, this entire sequence is serialized through the system as a whole. It means that if another northbound request arrives at the CFS node while the first request is being processed, the second request is synchronously queued at the CFS node, waiting for the currently running transaction to either succeed or fail.
-
-If the code on the RFS nodes is reactive, it will likely return without much waiting, since reactive FASTMAP (RFM) applications are usually very fast during their first round of execution. But that still yields lower performance than using the commit queue, since execution is eventually serialized when modifying devices. To maximize throughput, you also need to enable the commit queue functionality throughout the system.
-
-### Implementation Considerations
-
-The main benefit of LSA is that it scales horizontally at the RFS node layer. If one RFS node starts to become overloaded, it's easy to bring up an additional one, to share the load. Thus LSA caters to scalability at the level of the number of managed devices. However, each RFS node needs to host all the RFSs that touch the devices it manages under its `/devices/device` tree. There is still one, and only one, NSO node that directly manages a single device.
-
-Dividing a provisioning application into upper and lower-layer services also increases the complexity of the application itself. For example, to follow the execution of a reactive or nano RFS, typically an additional NETCONF notification code must be written. The notifications have to be sent from the RFS nodes and received and processed by the CFS code. This way, if something goes wrong at the device layer, the information is relayed all the way to the top level of the system.
-
-Furthermore, it is highly recommended that LSA applications enable the commit queue on all NSO nodes. If the commit queue is not enabled, the slowest device on the network will limit the overall throughput, significantly reducing the benefits of LSA.
-
-Finally, if the two-layer approach proves to be insufficient due to requirements at the CFS node, you can extend it to three layers, with an additional layer of NSO nodes between the CFS and RFS layers.
-
-## LSA Examples
-
-### Greenfield LSA Application
-
-This section describes a small LSA application, which exists as a running example in the [examples.ncs/layered-services-architecture/lsa-single-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture/lsa-single-version-deployment) directory.
-
-The application is a slight variation on the [examples.ncs/service-management/rfs-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/rfs-service) example where the YANG code has been split up into an upper-layer and a lower-layer implementation. The example topology (based on netsim for the managed devices, and NSO for the upper/lower layer NSO instances) looks like the following:
-
-_Figure: Example LSA architecture_
-
-The upper layer of the YANG service data for this example looks like the following:
-
-```yang
-module cfs-vlan {
- ...
- list cfs-vlan {
- key name;
- leaf name {
- type string;
- }
-
- uses ncs:service-data;
- ncs:servicepoint cfs-vlan;
-
- leaf a-router {
- type leafref {
- path "/dispatch-map/router";
- }
- mandatory true;
- }
- leaf z-router {
- type leafref {
- path "/dispatch-map/router";
- }
- mandatory true;
- }
- leaf iface {
- type string;
- mandatory true;
- }
- leaf unit {
- type int32;
- mandatory true;
- }
- leaf vid {
- type uint16;
- mandatory true;
- }
- }
-}
-```
-
-Instantiating one CFS service, we have:
-
-```
-admin@upper-nso% show cfs-vlan
-cfs-vlan v1 {
- a-router ex0;
- z-router ex5;
- iface eth3;
- unit 3;
- vid 77;
-}
-```
-
-The provisioning code for this CFS has to decide where to instantiate what. In this example, the "what" is trivial: it's the accompanying RFS. The "where" is more involved. The two underlying RFS nodes each manage three netsim routers, so, given the input, the CFS code must be able to determine which RFS node to choose. In this example, we have chosen to use an explicit map, so on the `upper-nso` we also have:
-
-```
-admin@upper-nso% show dispatch-map
-dispatch-map ex0 {
- rfs-node lower-nso-1;
-}
-dispatch-map ex1 {
- rfs-node lower-nso-1;
-}
-dispatch-map ex2 {
- rfs-node lower-nso-1;
-}
-dispatch-map ex3 {
- rfs-node lower-nso-2;
-}
-dispatch-map ex4 {
- rfs-node lower-nso-2;
-}
-dispatch-map ex5 {
- rfs-node lower-nso-2;
-}
-```
-
-So, we have template-based CFS code that does the dispatching to the right RFS node:
-
-```xml
-<config-template xmlns="http://tail-f.com/ns/config/1.0"
-                 servicepoint="cfs-vlan">
-  <devices xmlns="http://tail-f.com/ns/ncs">
-    <device foreach="{a-router | z-router}">
-      <!-- Pick the right RFS node from the dispatch map -->
-      <name>{string(deref(current())/../rfs-node)}</name>
-      <config>
-        <vlan xmlns="http://com/example/rfsvlan">
-          <name>{string(/name)}</name>
-          <router>{current()}</router>
-          <iface>{/iface}</iface>
-          <unit>{/unit}</unit>
-          <vid>{/vid}</vid>
-          <description>Interface owned by CFS: {/name}</description>
-        </vlan>
-      </config>
-    </device>
-  </devices>
-</config-template>
-```
-
-This technique for dispatching is simple and easy to understand. The dispatching might be more complex: it might be determined at execution time depending on CPU load, it might be (as in this example) inferred from input parameters, or it might be computed in some other way.
-
-The result of the template-based service is to instantiate the RFS on the RFS nodes.
-
-First, let's have a look at what happened in the `upper-nso`. Look at the modifications, but ignore the fact that this is an LSA service:
-
-```
-admin@upper-nso% request cfs-vlan v1 get-modifications no-lsa
-cli {
- local-node {
- data devices {
- device lower-nso-1 {
- config {
- + rfs-vlan:vlan v1 {
- + router ex0;
- + iface eth3;
- + unit 3;
- + vid 77;
- + description "Interface owned by CFS: v1";
- + }
- }
- }
- device lower-nso-2 {
- config {
- + rfs-vlan:vlan v1 {
- + router ex5;
- + iface eth3;
- + unit 3;
- + vid 77;
- + description "Interface owned by CFS: v1";
- + }
- }
- }
- }
- }
-}
-```
-
-Just the dispatched data is shown. As `ex0` and `ex5` reside on different nodes, the service instance data has to be sent to both `lower-nso-1` and `lower-nso-2`.
-
-Now let's see what happened in the `lower-nso`. Look at the modifications and take into account that these are LSA nodes (this is the default):
-
-```
-admin@upper-nso% request cfs-vlan v1 get-modifications
-cli {
- local-node {
- .....
- }
- lsa-service {
- service-id /devices/device[name='lower-nso-1']/config/rfs-vlan:vlan[name='v1']
- data devices {
- device ex0 {
- config {
- r:sys {
- interfaces {
- + interface eth3 {
- + enabled;
- + unit 3 {
- + enabled;
- + description "Interface owned by CFS: v1";
- + vlan-id 77;
- + }
- + }
- }
- }
- }
- }
- }
- }
- lsa-service {
- service-id /devices/device[name='lower-nso-2']/config/rfs-vlan:vlan[name='v1']
- data devices {
- device ex5 {
- config {
- r:sys {
- interfaces {
- + interface eth3 {
- + enabled;
- + unit 3 {
- + enabled;
- + description "Interface owned by CFS: v1";
- + vlan-id 77;
- + }
- + }
- }
- }
- }
- }
- }
- }
-```
-
-Both the dispatched data and the modification of the remote service are shown. As `ex0` and `ex5` reside on different nodes, the service modifications of the service `rfs-vlan` on both `lower-nso-1` and `lower-nso-2` are shown.
-
-The communication between the NSO nodes is of course NETCONF.
-
-```
-admin@upper-nso% set cfs-vlan v1 a-router ex0 z-router ex5 iface eth3 unit 3 vid 78
-[ok][2016-10-20 16:52:45]
-
-[edit]
-admin@upper-nso% commit dry-run outformat native
-native {
-    device {
-        name lower-nso-1
-        data <rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
-                  message-id="1">
-               <edit-config xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0">
-                 <target>
-                   <running/>
-                 </target>
-                 <test-option>test-then-set</test-option>
-                 <error-option>rollback-on-error</error-option>
-                 <config>
-                   <vlan xmlns="http://com/example/rfsvlan">
-                     <name>v1</name>
-                     <vid>78</vid>
-                     <private>
-                       <re-deploy-counter>-1</re-deploy-counter>
-                     </private>
-                   </vlan>
-                 </config>
-               </edit-config>
-             </rpc>
-    }
-    ...........
-    ....
-```
-
-The YANG model at the lower layer, also known as the RFS layer, is similar to the CFS, but slightly different:
-
-```yang
-module rfs-vlan {
-
- ...
-
- list vlan {
- key name;
- leaf name {
- tailf:cli-allow-range;
- type string;
- }
-
- uses ncs:service-data;
- ncs:servicepoint "rfs-vlan";
-
- leaf router {
- type string;
- }
- leaf iface {
- type string;
- mandatory true;
- }
- leaf unit {
- type int32;
- mandatory true;
- }
- leaf vid {
- type uint16;
- mandatory true;
- }
- leaf description {
- type string;
- mandatory true;
- }
- }
-}
-```
-
-The task for the RFS provisioning code here is to actually provision the designated router. If we log into one of the lower layer NSO nodes, we can check the following.
-
-```
-admin@lower-nso-1> show configuration vlan
-vlan v1 {
- router ex0;
- iface eth3;
- unit 3;
- vid 77;
- description "Interface owned by CFS: v1";
-}
-[ok][2016-10-20 17:01:08]
-admin@lower-nso-1> request vlan v1 get-modifications
-cli {
- local-node {
- data devices {
- device ex0 {
- config {
- r:sys {
- interfaces {
- + interface eth3 {
- + enabled;
- + unit 3 {
- + enabled;
- + description "Interface owned by CFS: v1";
- + vlan-id 77;
- + }
- + }
- }
- }
- }
- }
- }
- }
-}
-```
-
-To conclude this section: the trick to designing a good LSA application is to identify a good layering for the service data models. The upper, CFS layer is what is exposed northbound and thus requires a model that is as forward-looking as possible, since that model is what systems north of NSO integrate with. The lower-layer RFS models, on the other hand, can be viewed as "internal system models" and can be changed more easily.
-
-### Greenfield LSA Application Designed for Easy Scaling
-
-In this section, we'll describe a lightly modified version of the example in the previous section. The application we describe here exists as a running example under [examples.ncs/layered-services-architecture/lsa-scaling](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture/lsa-scaling).
-
-Sometimes it is desirable to be able to easily move devices from one lower LSA node to another. This makes it possible to easily expand or shrink the number of lower LSA nodes. Additionally, it is sometimes desirable to avoid HA pairs for replication but instead use a common store for all lower LSA devices, such as a distributed database, or a common file system.
-
-The above is possible provided that the LSA application is structured in certain ways.
-
-* The lower LSA nodes only expose services that manipulate the configuration of a single device. We call these device RFSs, or dRFS for short.
-* All services are located in a way that makes it easy to extract them, for example in `/drfs:dRFS/device`:
-
- ```yang
- container dRFS {
- list device {
- key name;
- leaf name {
- type string;
- }
- }
- }
- ```
-* No RFM (reactive FASTMAP) takes place on the lower LSA nodes. This avoids complications with locking and distributed event handling.
-* The LSA nodes need to be set up with the proper NEDs and with auth groups such that a device can be moved without having to install new NEDs or update auth groups.
-
-Provided that the above requirements are met, it is possible to move a device from one lower LSA node to another by extracting the configuration from the source node and installing it on the target node. This, of course, requires that the source node is still alive, which is normally the case when HA pairs are used.
-
-An alternative to using HA-pairs for the lower LSA nodes is to extract the device configuration after each modification to the device and store it in some central storage. This would not be recommended when high throughput is required but may make sense in certain cases.
-
-In the example application, there are two packages on the lower LSA nodes that provide this functionality. The package `inventory-updater` installs a database subscriber that is invoked every time any device configuration is modified, both in the preparation phase and in the commit phase of any such transaction. It extracts the device and dRFS configuration, including service metadata, during the preparation phase. If the transaction proceeds to a full commit, the package is again invoked and the extracted configuration is stored in a file in the directory `db_store`.
-
-The other package is called `device-actions`. It provides three actions: `extract-device`, `install-device`, and `delete-device`. They are intended to be used by the upper LSA node when moving a device either from a lower LSA node or from `db_store`.
-
-In the upper LSA node, there is one package for coordinating the movement, called `move-device`. It provides an action for moving a device from one lower LSA node to another. For example, when invoked to move device `ex0` from `lower-1` to `lower-2` using the action
-
-```cli
-request move-device move src-nso lower-1 dest-nso lower-2 device-name ex0
-```
-
-it goes through the following steps:
-
-* A partial lock is acquired on the upper-nso for the path `/devices/device[name=lower-1]/config/dRFS/device[name=ex0]` to avoid any changes to the device while the device is in the process of being moved.
-* The device and dRFS configuration are extracted in one of two ways:
-
- * Read the configuration from `lower-1` using the action
-
- ```cli
- request device-action extract-device name ex0
- ```
- * Read the configuration from some central store, in our case the file system directory `db_store`.
-
- The configuration will look something like this:
-
- ```
- devices {
- device ex0 {
- address 127.0.0.1;
- port 12022;
- ssh {
- ...
- /* Refcount: 1 */
- /* Backpointer: [ /drfs:dRFS/drfs:device[drfs:name='ex0']/rfs-vlan:vlan[rfs-vlan:name='v1'] ] */
- interface eth3 {
- ...
- }
- ...
- }
- }
- dRFS {
- device ex0 {
- vlan v1 {
- private {
- ...
- }
- }
- }
- }
- ```
-* Install the configuration on the `lower-2` node. This can be done by running the action:
-
- ```cli
- request device-action install-device name ex0 config
- ```
-
- This will load the configuration and commit using the flags `no-deploy` and `no-networking`.
-* Delete the device from `lower-1` by running the action
-
- ```cli
- request device-action delete-device name ex0
- ```
-* Update the dispatch map table:
-
- ```
- dispatch-map ex0 {
- rfs-node lower-nso-2;
- }
- ```
-* Release the partial lock for `/devices/device[name=lower-1]/config/dRFS/device[name=ex0]`.
-* Re-deploy all services that have touched the device. The services all have backpointers from `/devices/device{lower-1}/config/dRFS/device{ex0}`. They are re-deployed using the flags `no-lsa` and `no-networking`.
-* Finally, the action runs `compare-config` on `lower-1` and `lower-2`.
-
-With this infrastructure in place, it is fairly straightforward to implement actions for re-balancing devices among lower LSA nodes, as well as evacuating all devices from a given lower LSA node. The example contains implementations of those actions as well.
-
-### Re-architecting an Existing VPN Application for LSA
-
-If we do not have the luxury of designing our NSO service application from scratch, but rather are faced with extending or changing an existing, already deployed application into the LSA architecture, we can use the techniques described in this section.
-
-Usually, the reasons for re-architecting an existing application are performance-related.
-
-In the NSO example collection, two popular examples are the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-java) and [examples.ncs/service-management/mpls-vpn-python](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-python) examples. They contain an almost "real" VPN provisioning example whereby VPNs are provisioned in a network of CPE, PE, and P routers, according to this picture:
-
-_Figure: VPN network_
-
-The service model in this example roughly looks like this:
-
-```yang
- list l3vpn {
- description "Layer3 VPN";
-
- key name;
- leaf name {
- type string;
- }
-
- leaf route-distinguisher {
- description "Route distinguisher/target identifier unique for the VPN";
- mandatory true;
- type uint32;
- }
-
- list endpoint {
- key "id";
- leaf id {
- type string;
- }
- leaf ce-device {
- mandatory true;
- type leafref {
- path "/ncs:devices/ncs:device/ncs:name";
- }
- }
-
- leaf ce-interface {
- mandatory true;
- type string;
- }
-
- ....
-
- leaf as-number {
- tailf:info "CE Router as-number";
- type uint32;
- }
- }
- container qos {
- leaf qos-policy {
- ......
-```
-
-There are several interesting observations on this model code related to the Layered Service Architecture.
-
-* Each instantiated service has a list of endpoints and CPE routers. These are modeled as leafrefs into the `/devices` tree. This has to be changed if we wish to turn this application into an LSA application, since the `/devices` tree at the upper layer doesn't contain the actual managed routers. Instead, it contains the lower-layer RFS nodes.
-* There is no connectivity/topology information in the service model. Instead, the `mpls-vpn` example has topology information on the side, and that data is used by the provisioning code. That topology information for example contains data on which CE routers are directly connected to which PE router.
-
- Remember from the previous section that one of the additional complications of an LSA application is the dispatching part. The dispatching problem fits well into the pattern where we have topology information stored on the side and let the provisioning FASTMAP code use that data to guide the provisioning. One straightforward way would be to augment the topology information with additional data, indicating which RFS node is used to manage a specific managed device.
-
-By far the easiest way to change an existing monolithic NSO application into the LSA architecture is to keep the service models at the upper and lower layers almost identical, changing only things like leafrefs directly into the `/devices` tree, which would otherwise break.
-
-In this example, the topology information is stored in a separate container `share-data` and propagated to the LSA nodes by means of service code.
-
-The [examples.ncs/layered-services-architecture/mpls-vpn-lsa](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture/mpls-vpn-lsa) example does exactly this; the upper-layer data model in `upper-nso/packages/l3vpn/src/yang/l3vpn.yang` now looks like:
-
-```yang
- list l3vpn {
- description "Layer3 VPN";
-
- key name;
- leaf name {
- type string;
- }
-
- leaf route-distinguisher {
- description "Route distinguisher/target identifier unique for the VPN";
- mandatory true;
- type uint32;
- }
-
- list endpoint {
- key "id";
- leaf id {
- type string;
- }
- leaf ce-device {
- mandatory true;
- type string;
- }
- .......
-```
-
-The `ce-device` leaf is now just a regular string, not a leafref.
-
-So, instead of an NSO topology that looks like:
-
-_Figure: NSO topology_
-
-We want an NSO architecture that looks like this:
-
-_Figure: NSO LSA topology_
-
-The task for the upper layer FastMap code is then to instantiate a copy of itself on the right lower layer NSO nodes. The upper layer FastMap code must:
-
-* Determine which routers, (CE, PE, or P) will be touched by its execution.
-* Look in its dispatch table, which lower-layer NSO nodes are used to host these routers.
-* Instantiate a copy of itself on those lower layer NSO nodes. One extremely efficient way to do that is to use the `Maapi.copyTree()` method. The example contains code that looks like this:
-
- ```java
- public Properties create(
-     ....
-     NavuContainer lowerLayerNSO = ....
-
-     // Reuse the MAAPI session and transaction of the current service invocation
-     Maapi maapi = service.context().getMaapi();
-     int tHandle = service.context().getMaapiHandle();
-     // Create (or attach to) the RFS service instance under the
-     // lower-layer node's /devices/device/config tree on the CFS node
-     NavuNode dstVpn = lowerLayerNSO.container("config").
-         container("l3vpn", "vpn").
-         list("l3vpn").
-         sharedCreate(serviceName);
-     ConfPath dst = dstVpn.getConfPath();
-     ConfPath src = service.getConfPath();
-
-     // Copy the CFS service parameters verbatim into the RFS instance
-     maapi.copyTree(tHandle, true, src, dst);
- ```
-
-Finally, we must make a minor modification to the lower-layer (RFS) provisioning code too. Originally, the FastMap code wrote the configuration for all routers participating in the VPN. With the LSA partitioning, each lower-layer NSO node is only responsible for the portion of the VPN that involves devices residing in its `/devices` tree, so the provisioning code must be changed to ignore devices that do not.
-
-### Re-architecting Details
-
-In addition to conceptual changes of splitting into upper- and lower-layer parts, migrating an existing monolithic application to LSA may also impact the models used. In the new design, the upper-layer node contains the (more or less original) CFS model as well as the device-compiled RFS model, which it requires for communication with the RFS nodes. In a typical scenario, these are two separate models. So, for example, they must each use a unique namespace.
-
-To illustrate the different YANG files and namespaces used, the following text describes the process of splitting up an example monolithic service. Let's assume that the original service resides in a file, `myserv.yang`, and looks like the following:
-
-```yang
-module myserv {
-
- namespace "http://example.com/myserv";
- prefix ms;
-
- .....
-
- list srv {
- key name;
- leaf name {
- type string;
- }
-
- uses ncs:service-data;
- ncs:servicepoint vlanspnt;
-
- leaf router {
- type leafref {
- path "/ncs:devices/ncs:device/ncs:name";
- .....
- }
-}
-```
-
-In an LSA setting, we want to keep this module as close to the original as possible. We clearly want to keep the namespace, the prefix, and the structure of the YANG identical to the original, so as not to disturb any provisioning systems north of the original NSO. Thus, with only minor modifications, we want to run this module at the CFS node, with the non-applicable leafrefs removed:
-
-```yang
-module myserv {
-
- namespace "http://example.com/myserv";
- prefix ms;
-
- .....
-
- list srv {
- key name;
- leaf name {
- type string;
- }
-
- uses ncs:service-data;
- ncs:servicepoint vlanspnt;
-
- leaf router {
- type string;
- .....
- }
-}
-```
-
-Now, we want to run almost the same YANG module at the RFS node, but the namespace must be changed. For the sake of the CFS node, we're going to NED-compile the RFS model, and NSO doesn't allow the same namespace to occur twice. Thus, for the RFS node, we get a YANG module `myserv-rfs.yang` that looks like the following:
-
-```yang
-module myserv-rfs {
-
- namespace "http://example.com/myserv-rfs";
- prefix ms-rfs;
-
- .....
-
- list srv {
- key name;
- leaf name {
- type string;
- }
-
- uses ncs:service-data;
- ncs:servicepoint vlanspnt;
-
- leaf router {
- type leafref {
- path "/ncs:devices/ncs:device/ncs:name";
- .....
- }
-}
-```
-
-This file can, and should, keep the leafref as is.
-
-The final file we get is the compiled NED, which should be loaded in the CFS node. The NED is compiled directly from the RFS model, as an LSA NED:
-
-```bash
-$ ncs-make-package --lsa-netconf-ned /path/to-rfs-yang myserv-rfs-ned
-```
-
-Thus, we end up with three distinct packages from the original one:
-
-1. The original, slated for the CFS node, with leafrefs removed.
-2. The modified original, slated for the RFS node, with the namespace and the prefix changed.
-3. The NED, compiled from the RFS node code, slated for the CFS node.
-
-## Deploying LSA
-
-The purpose of the upper CFS node is to manage all CFS services and to push the resulting service mappings to the RFS services. The lower RFS nodes are configured as devices in the device tree of the upper CFS node and the RFS services are created under the `/devices/device/config` accordingly. This is almost identical to the relation between a normal NSO node and the normal devices. However, there are differences when it comes to commit parameters and the commit queue, as well as some other LSA-specific features.
-
-Such a design allows you to decide whether you will run the same version of NSO on all nodes or not. Since some differences arise between the two options, this document distinguishes a single-version deployment from a multi-version one.
-
-Deployment of an LSA cluster where all the nodes have the same major version of NSO running is called a single version deployment. If the versions are different, then it is a multi-version deployment, since the packages on the CFS node must be managed differently.
-
-The choice between the two deployment options depends on your functional needs. The single version is easier to maintain and is a good starting point but is less flexible. While it is possible to migrate from one to the other, the migration from a single version to a multi-version is typically easier than the other way around. Still, every migration requires some effort, so it is best to pick one approach and stick to it.
-
-You can find working examples of both deployment types in the [examples.ncs/layered-services-architecture/lsa-single-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture/lsa-single-version-deployment) and [examples.ncs/layered-services-architecture/lsa-multi-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture/lsa-multi-version-deployment) folders, respectively.
-
-### RFS Nodes Setup
-
-The type of deployment does not affect the RFS nodes. In general, the RFS nodes act very much like ordinary standalone NSO instances but only support the RFS services.
-
-Configure and set up the lower RFS nodes as you would a standalone node, by making sure the necessary NED and RFS packages are loaded and the managed network devices added. This requires you to have already decided on the distribution of devices to lower RFS nodes. The RFS packages are ordinary service packages.
-
-The only LSA-specific requirement is that these nodes enable NETCONF communication northbound, as this is how the upper CFS node will interact with them. To enable NETCONF northbound, ensure that a configuration similar to the following is present in the `ncs.conf` of every RFS node:
-
-```xml
-<netconf-north-bound>
-  <enabled>true</enabled>
-  <transport>
-    <ssh>
-      <enabled>true</enabled>
-      <ip>0.0.0.0</ip>
-      <port>2022</port>
-    </ssh>
-  </transport>
-</netconf-north-bound>
-```
-
-One thing to note is that you do not need to explicitly enable the commit queue on the RFS nodes, even if you intend to use LSA with the commit queue feature. The upper CFS node is aware of the LSA setup and will propagate the relevant commit flags to the lower RFS nodes automatically.
-
-If you wish to enable the commit queue by default, that is, even for transactions originating on the RFS node (non-LSA), you are strongly encouraged to enable it globally, through the `/devices/global-settings/commit-queue/enabled-by-default` setting on all the RFS nodes and, importantly, the upper CFS node too. Otherwise, you may end up in a situation where only a part of the transaction runs through the commit queue. In that case, the `rollback-on-error` commit queue error option will not work correctly, as it can't roll back the full original transaction but just the part that went through the commit queue. This can result in an inconsistent network state.
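-
-A minimal sketch of enabling it globally (prompt illustrative; the same setting is applied on each RFS node and on the CFS node):
-
-```cli
-admin@ncs% set devices global-settings commit-queue enabled-by-default true
-admin@ncs% commit
-```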
-
-### CFS Node Setup
-
-Regardless of single or multi-version deployment, the upper CFS node has the lower RFS nodes configured as devices under the `/devices/device` tree. The CFS node communicates with these devices through NETCONF and must have the correct `ned-id` configured for each lower RFS node. The `ned-id` is set under `/devices/device/device-type/netconf/ned-id`, as for any NETCONF device.
-
-The part that is specific to LSA is the actual `ned-id` used. This has to be `ned:lsa-netconf` or a `ned-id` derived from it. What is more, the `ned-id` depends on the deployment type. For a single-version deployment, you can use the `lsa-netconf` value directly. This `ned-id` is built-in (defined in `tailf-ncs-ned.yang`) and available in NSO without any additional packages.
-
-So the configuration for the RFS device in the CFS node would look similar to:
-
-```cli
-admin@upper-nso% show devices device | display-level 4
-device lower-nso-1 {
- lsa-remote-node lower-nso-1;
- authgroup default;
- device-type {
- netconf {
- ned-id lsa-netconf;
- }
- }
- state {
- admin-state unlocked;
- }
-}
-```
-
-Notice the use of the `lsa-remote-node` instead of the `address` (and `port`) as is usually done. This setting identifies the device as a lower-layer LSA node and instructs NSO to use connection information provided under `cluster` configuration.
-
-The value of `lsa-remote-node` references a `cluster remote-node`, such as the following:
-
-```cli
-admin@upper-nso% show cluster remote-node
-remote-node lower-nso-1 {
- address 127.0.2.1;
- authgroup default;
-}
-```
-
-Note the `authgroup` value here: unlike the one under `devices device`, it refers to a `cluster authgroup`, not a device authgroup. Both authgroups must be configured correctly for LSA to function.
-
-Having added the device and cluster configuration for all RFS nodes, you should update the SSH host keys for both the `/devices/device` and `/cluster/remote-node` paths. For example:
-
-```cli
-admin@upper-nso% request devices device lower-nso-* ssh fetch-host-keys
-admin@upper-nso% request cluster remote-node lower-nso-* ssh fetch-host-keys
-```
-
-Moreover, the RFS NSO nodes have an extra configuration that may not be visible to the CFS node, resulting in out-of-sync behavior. You are strongly encouraged to set the `out-of-sync-commit-behaviour` value to `accept`, with a command such as:
-
-```cli
-admin@upper-nso% set devices device lower-nso-* out-of-sync-commit-behaviour accept
-```
-
-At the same time you should also enable the `/cluster/device-notifications`, which will allow the CFS node to receive the forwarded device notifications from the RFS nodes, and `/cluster/commit-queue`, to enable the commit queue support for LSA. Without the latter, you will not be able to use the `commit commit-queue async` command, for example.
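-
-For example, on the CFS node (the walkthrough later in this section uses the same commands):
-
-```cli
-admin@upper-nso% set cluster device-notifications enabled
-admin@upper-nso% set cluster commit-queue enabled
-admin@upper-nso% commit
-```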
-
-If you wish to enable the commit queue by default, you should do so by setting the `/devices/global-settings/commit-queue/enabled-by-default` on the CFS node. Do not use per device or per device group configuration, for the same reason you should avoid it on the RFS nodes.
-
-#### Multi-Version Deployment
-
-If you plan a single-version deployment, the preceding steps are sufficient. For a multi-version deployment, on the other hand, there are two additional tasks to perform.
-
-First, you will need to install the correct Cisco-NSO LSA NED package (or packages if you need to support more versions). Each NSO release includes these packages that are specifically tailored for LSA. They are used by the upper CFS node if the lower RFS nodes are running a different version than the CFS node itself. The packages are named `cisco-nso-nc-X.Y` where X.Y are the two most significant numbers of the NSO release (the major version) that the package supports. So, if your RFS nodes are running NSO 5.7.2, for example, you should use `cisco-nso-nc-5.7`.
-
-These packages are found in the `$NCS_DIR/packages/lsa` directory. Each package contains the complete model of the `ncs` namespace for the corresponding NSO version, compiled as an LSA NED. Please always use the `cisco-nso` package included with the NSO version of the upper CFS node and not some older variant (such as the one from the lower RFS node) as it may not work correctly.
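-
-For example, you might link the package into the CFS node's packages directory and reload packages (paths illustrative; the walkthrough later in this section does the same for 5.4):
-
-```bash
-$ ln -sf ${NCS_DIR}/packages/lsa/cisco-nso-nc-5.7 upper-nso/packages
-```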
-
-Second, installing the cisco-nso LSA NED package will make the corresponding `ned-id` available, such as `cisco-nso-nc-5.7` (`ned-id` matches the package name). Use this `ned-id` for the RFS nodes instead of `lsa-netconf`. For example:
-
-```cli
-admin@upper-nso% show devices device | display-level 4
-device lower-nso-1 {
- lsa-remote-node lower-nso-1;
- authgroup default;
- device-type {
- netconf {
- ned-id cisco-nso-nc-5.7;
- }
- }
- state {
- admin-state unlocked;
- }
-}
-```
-
-This configuration allows the CFS node to communicate with a different NSO version but there are still some limitations. The upper CFS node must have the same or newer version than the managed RFS nodes. For all the currently supported versions of the lower node, the packages can be found in the `$NCS_DIR/packages/lsa` directory, but you may also be able to build an older one yourself.
-
-In case you already have a single-version deployment using the `lsa-netconf` ned-id, you can use the NED migration procedure to switch to the new `ned-id` and a multi-version deployment.
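-
-A sketch of such a migration, assuming the lower node now requires the `cisco-nso-nc-5.7` package (device name and version illustrative):
-
-```cli
-admin@upper-nso% request devices device lower-nso-1 migrate new-ned-id cisco-nso-nc-5.7
-```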
-
-### Device Compiled RFS Services
-
-Besides adding managed lower-layer nodes, the upper-layer node also requires packages for the services. Obviously, you must add the CFS package, which is an ordinary service package, to the CFS node. But you must also provide the device compiled RFS YANG models to allow provisioning of RFSs on the remote RFS nodes.
-
-The process resembles the way you create and compile device YANG models in normal NED packages. The `ncs-make-package` tool provides the `--lsa-netconf-ned` option, where you specify the location of the RFS YANG model and the tool creates a NED package for you. This is a new package that is separate from the RFS package used in the RFS nodes, so you might want to name it differently to avoid confusion. The following text uses the `-ned` suffix.
-
-Usually, you would also provide the `--no-netsim`, `--no-java`, and `--no-python` switches to the invocation, as the package is used with the NETCONF protocol and doesn't need any additional code. The `--no-netsim` option is required because netsim is not supported for these types of packages. For example:
-
-```bash
-ncs-make-package --no-netsim --no-java --no-python \
- --lsa-netconf-ned ./path/to/rfs/src/yang \
- myrfs-service-ned
-```
-
-In this case, there is no explicit `--lsa-lower-nso` option specified and `ncs-make-package` will by default be set up to compile the package for the single version deployment, tied to the `lsa-netconf` `ned-id`. That means the models in the NED can be used with devices that have a `lsa-netconf` `ned-id` configured.
-
-To compile it for the multi-version deployment, which uses a different `ned-id`, you must select the target NSO version with the `--lsa-lower-nso cisco-nso-nc-X.Y` option, for example:
-
-```bash
-ncs-make-package --no-netsim --no-java --no-python \
- --lsa-netconf-ned ./path/to/rfs/src/yang \
- --lsa-lower-nso cisco-nso-nc-5.7 \
- myrfs-service-ned
-```
-
-Depending on the RFS model, the package may fail to compile, even though the model compiles fine as a service. A typical error would indicate some node from a module, such as `tailf-ncs`, is not found. The reason is that the original RFS service YANG model has dependencies on other YANG models that are not included in the compilation process.
-
-One solution to this problem is to remove the dependencies in the YANG model before compilation. Normally this can be solved by changing the datatype in the NED compiled copy of the YANG model, for example from `leafref` or `instance-identifier` to string. This is only needed for the NED compiled copy, the lower RFS node YANG model can remain the same. There will then be an implicit conversion between types, at runtime, in the communication between the upper CFS node and the lower RFS node.
-
-An alternate solution, if you are doing a single version deployment and there are dependencies on the `tailf-ncs` namespace, is to switch to a multi-version deployment because the `cisco-nso` package includes this namespace (device compiled). Here, the NSO versions match but you are still using the `cisco-nso-nc-X.Y` `ned-id` and have to follow the instructions for the multi-version deployment.
-
-Once you have both the CFS and device-compiled RFS service packages ready, add them to the CFS node, then invoke a `sync-from` action to complete the setup process.
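-
-For example (prompt illustrative):
-
-```cli
-admin@upper-nso% request devices sync-from
-```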
-
-### Example Walkthrough
-
-You can see all the required setup steps for a single version deployment performed in the example [examples.ncs/layered-services-architecture/lsa-single-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture/lsa-single-version-deployment) and the [examples.ncs/layered-services-architecture/lsa-multi-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture/lsa-multi-version-deployment) has the steps for the multi-version one. The two are quite similar but the multi-version deployment has additional steps, so it is the one described here.
-
-First, build the example for manual setup.
-
-```bash
-$ make clean manual
-$ make start-manual
-$ make cli-upper-nso
-```
-
-Then configure the nodes in the cluster. This is needed so that the upper CFS node can receive notifications from the lower RFS node and prepare the upper CFS node to be used with the commit queue.
-
-```cli
-> configure
-
-% set cluster device-notifications enabled
-% set cluster remote-node lower-nso-1 authgroup default username admin
-% set cluster remote-node lower-nso-1 address 127.0.0.1 port 2023
-% set cluster remote-node lower-nso-2 authgroup default username admin
-% set cluster remote-node lower-nso-2 address 127.0.0.1 port 2024
-% set cluster commit-queue enabled
-% commit
-% request cluster remote-node lower-nso-* ssh fetch-host-keys
-```
-
-To be able to handle the lower NSO node as an LSA node, the correct version of the `cisco-nso-nc` package needs to be installed. In this example, 5.4 is used.
-
-Create a link to the `cisco-nso` package in the packages directory of the upper CFS node:
-
-```bash
-$ ln -sf ${NCS_DIR}/packages/lsa/cisco-nso-nc-5.4 upper-nso/packages
-```
-
-Reload the packages:
-
-```cli
-% exit
-> request packages reload
-
->>> System upgrade is starting.
->>> Sessions in configure mode must exit to operational mode.
->>> No configuration changes can be performed until upgrade has completed.
->>> System upgrade has completed successfully.
-reload-result {
- package cisco-nso-nc-5.4
- result true
-}
-```
-
-Now that the `cisco-nso-nc` package is in place, configure the two lower NSO nodes and `sync-from` them:
-
-```cli
-> configure
-Entering configuration mode private
-
-% set devices device lower-nso-1 device-type netconf ned-id cisco-nso-nc-5.4
-% set devices device lower-nso-1 authgroup default
-% set devices device lower-nso-1 lsa-remote-node lower-nso-1
-% set devices device lower-nso-1 state admin-state unlocked
-% set devices device lower-nso-2 device-type netconf ned-id cisco-nso-nc-5.4
-% set devices device lower-nso-2 authgroup default
-% set devices device lower-nso-2 lsa-remote-node lower-nso-2
-% set devices device lower-nso-2 state admin-state unlocked
-
-% commit
-Commit complete.
-
-% request devices fetch-ssh-host-keys
-fetch-result {
- device lower-nso-1
- result updated
- fingerprint {
- algorithm ssh-ed25519
- value 4a:c6:5d:91:6d:4a:69:7a:4e:0d:dc:4e:51:51:ee:e2
- }
-}
-fetch-result {
- device lower-nso-2
- result updated
- fingerprint {
- algorithm ssh-ed25519
- value 4a:c6:5d:91:6d:4a:69:7a:4e:0d:dc:4e:51:51:ee:e2
- }
-}
-
-% request devices sync-from
-sync-result {
- device lower-nso-1
- result true
-}
-sync-result {
- device lower-nso-2
- result true
-}
-```
-
-Now, for example, the configured devices of the lower nodes can be viewed:
-
-```cli
-% show devices device config devices device | display xpath | display-level 5
-
-/devices/device[name='lower-nso-1']/config/ncs:devices/device[name='ex0']
-/devices/device[name='lower-nso-1']/config/ncs:devices/device[name='ex1']
-/devices/device[name='lower-nso-1']/config/ncs:devices/device[name='ex2']
-/devices/device[name='lower-nso-2']/config/ncs:devices/device[name='ex3']
-/devices/device[name='lower-nso-2']/config/ncs:devices/device[name='ex4']
-/devices/device[name='lower-nso-2']/config/ncs:devices/device[name='ex5']
-```
-
-Or, alarms inspected:
-
-```cli
-% run show devices device lower-nso-1 live-status alarms summary
-
-live-status alarms summary indeterminates 0
-live-status alarms summary criticals 0
-live-status alarms summary majors 0
-live-status alarms summary minors 0
-live-status alarms summary warnings 0
-```
-
-Now, create a NETCONF package on the upper CFS node that can be used towards the `rfs-vlan` service on the lower RFS nodes. In the shell terminal window, do the following:
-
-```bash
-$ ncs-make-package --no-netsim --no-java --no-python \
- --lsa-netconf-ned package-store/rfs-vlan/src/yang \
- --lsa-lower-nso cisco-nso-nc-5.4 \
- --package-version 5.4 --dest upper-nso/packages/rfs-vlan-nc-5.4 \
- --build rfs-vlan-nc-5.4
-```
-
-The created NED is an `lsa-netconf-ned` based on the YANG files of the `rfs-vlan` service:
-
-```
---lsa-netconf-ned package-store/rfs-vlan/src/yang
-```
-
-The version of the NED reflects the version of NSO on the lower node:
-
-```
---package-version 5.4
-```
-
-The package will be generated in the packages directory of the upper NSO CFS node:
-
-```
---dest upper-nso/packages/rfs-vlan-nc-5.4
-```
-
-And, the name of the package will be:
-
-```
-rfs-vlan-nc-5.4
-```
-
-Install the `cfs-vlan` service on the upper CFS node. In the shell terminal window, do the following:
-
-```bash
-$ ln -sf ../../package-store/cfs-vlan upper-nso/packages
-```
-
-Reload the packages once more to get the `cfs-vlan` package. In the CLI terminal window, do the following:
-
-```cli
-% exit
-
-> request packages reload
-
->>> System upgrade is starting.
->>> Sessions in configure mode must exit to operational mode.
->>> No configuration changes can be performed until upgrade has completed.
->>> System upgrade has completed successfully.
-reload-result {
- package cfs-vlan
- result true
-}
-reload-result {
- package cisco-nso-nc-5.4
- result true
-}
-reload-result {
- package rfs-vlan-nc-5.4
- result true
-}
-
-> configure
-Entering configuration mode private
-```
-
-Now, when all packages are in place, a `cfs-vlan` service can be configured. The `cfs-vlan` service will dispatch service data to the right lower RFS node depending on the device names used in the service.
-
-In the CLI terminal window, verify the service:
-
-```cli
-% set cfs-vlan v1 a-router ex0 z-router ex5 iface eth3 unit 3 vid 77
-
-% commit dry-run
-.....
- local-node {
- data devices {
- device lower-nso-1 {
- config {
- services {
- + vlan v1 {
- + router ex0;
- + iface eth3;
- + unit 3;
- + vid 77;
- + description "Interface owned by CFS: v1";
- + }
- }
- }
- }
- device lower-nso-2 {
- config {
- services {
- + vlan v1 {
- + router ex5;
- + iface eth3;
- + unit 3;
- + vid 77;
- + description "Interface owned by CFS: v1";
- + }
- }
- }
- }
- }
-.....
-```
-
-As `ex0` resides on `lower-nso-1`, that part of the configuration goes there, and the `ex5` part goes to `lower-nso-2`.
-
-### Migration and Upgrades
-
-Since an LSA deployment consists of multiple NSO nodes (or HA pairs of nodes), each can be upgraded to a newer NSO version separately. While that offers a lot of flexibility, it also makes upgrades more complex in many cases. For example, performing a major version upgrade on the upper CFS node only will make the deployment Multi-Version even if it was Single-Version before the upgrade, requiring additional action on your part.
-
-In general, staying with the Single-Version Deployment is the simplest option and does not require any further LSA-specific upgrade action (except perhaps recompiling the packages). However, the main downside is that, at least for a major upgrade, you must upgrade all the nodes at the same time (otherwise, you no longer have a Single-Version Deployment).
-
-If that is not feasible, the solution is to run a Multi-Version Deployment. Along with all of the requirements, the section [Multi-Version Deployment](layered-service-architecture.md#ncs_lsa.lsa_setup.multi_version) describes a major difference from the Single Version variant: the upper CFS node uses a version-specific `cisco-nso-nc-X.Y` NED ID to refer to lower RFS nodes. That means, if you switch to a Multi-Version Deployment, or perform a major upgrade of the lower-layer RFS node, the `ned-id` should change accordingly. However, do not change it directly but follow the correct NED upgrade procedure described in the section called [NED Migration](../management/ned-administration.md#sec.ned_migration). Briefly, the procedure consists of these steps:
-
-1. Keep the currently configured ned-id for an RFS device and the corresponding packages. If upgrading the CFS node, you will need to recompile the packages for the new NSO version.
-2. Compile and load the packages that are device-compiled with the new `ned-id`, alongside the old packages.
-3. Use the `migrate` action on a device to switch over to the new `ned-id`.
-
-The procedure requires you to have two versions of the device-compiled RFS service packages loaded in the upper CFS node when calling the `migrate` action: one version compiled by referencing the old (current) NED ID and the other one by referencing the new (target) NED ID.
-
-To illustrate, suppose you currently have an upper-layer and a lower-layer node both running NSO 5.4. The nodes were set up as described in the Single-Version Deployment option, with the upper CFS node using the `tailf-ncs-ned:lsa-netconf` NED ID for the lower-layer RFS node. The CFS node also uses the `rfs-vlan-ned` NED package for the `rfs-vlan` service.
-
-Now you wish to upgrade the CFS node to NSO 5.7 but keep the RFS node on the existing version 5.4. Before upgrading the CFS node, you create a backup and recompile the `rfs-vlan-ned` package for NSO 5.7. Note that the package references the `lsa-netconf` `ned-id`, which is the `ned-id` configured for the RFS device in the CFS node's CDB. Then, you perform the CFS node upgrade as usual.
-
-At this point the CFS node is running the new, 5.7 version and the RFS node is running 5.4. Since you now have a Multi-Version Deployment, you should migrate to the correct `ned-id` as well. Therefore, you prepare the `rfs-vlan-nc-5.4` package, as described in the Multi-Version Deployment option, compile the package, and load it into the CFS node. Thanks to the NSO CDM feature, both packages, `rfs-vlan-nc-5.4` and `rfs-vlan-ned`, can be used at the same time.
-
-With the packages ready, you execute the `devices device lower-nso-1 migrate new-ned-id cisco-nso-nc-5.4` command on the CFS node. The command configures the RFS device entry on CFS to use the new `cisco-nso-nc-5.4 ned-id`, as well as migrates the device configuration and service meta-data to the new model. Having completed the upgrade, you can now remove the `rfs-vlan-ned` if you wish.
-
-Later on, you may decide to upgrade the RFS node to NSO 5.6. Again, you prepare the new `rfs-vlan-nc-5.6` package for the CFS node in a similar way as before, now using the `cisco-nso-nc-5.6` ned-id instead of `cisco-nso-nc-5.4`. Next, you perform the RFS node upgrade to 5.6 and finally migrate the RFS device on the CFS node to the `cisco-nso-nc-5.6 ned-id`, with the `migrate` action.
-
-Likewise, you can return to the Single-Version Deployment by upgrading the RFS node to NSO 5.7, reusing the old `rfs-vlan-ned` package (or preparing it anew), and migrating to the `lsa-netconf` ned-id.
-
-All these `ned-id` changes stem from the fact that the upper-layer CFS node treats the lower-layer RFS node as a managed device, requiring the correct model, just like it does for any other device type. For the same reason, maintenance (bug fix or patch) NSO upgrades do not result in a changed `ned-id`, so for those, no migration is necessary.
-
-The [NSO example set](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture) illustrates different aspects of LSA deployment including working with single- and multi-version deployments.
-
-### User Authorization Passthrough
-
-In LSA, northbound users are authenticated on the CFS, and the request is re-authenticated on the RFS using either a system user or user/pass passthrough.
-
-For token-based authentication using external or package authentication, this becomes a problem: the user and password are not expected to be locally provisioned and hence cannot be used for authentication towards the RFS, which leaves only the option of a system user.
-
-Using a system user has two major limitations:
-
-* Auditing on the RFS becomes hard, as system sessions are not logged in the `audit.log`.
-* Device-level RBAC becomes challenging as the devices reside in the RFS and the user information is lost.
-
-To handle this scenario, one can enable passthrough of the user name and its groups to lower-layer nodes, allowing the session on the RFS to assume the same user as on the CFS (similar to the use of `sudo`). This allows for the use of a system user between the CFS and RFS while still enabling auditing and RBAC on the RFS using the locally authenticated user from the CFS.
-
-On the CFS node, create an authgroup under `/devices/authgroups/group` with the `/devices/authgroups/group/{umap,default-map}/passthrough` empty leaf set, then select this authgroup on the configured RFS nodes by setting the `/devices/device/authgroup` leaf. When the passthrough leaf is set and a user (e.g., `alice`) on the CFS node connects to an RFS node, she will authenticate using the credentials specified in the `/devices/device/authgroup` authgroup (e.g., `lsa_passthrough_user` : `ahVaesai8Ahn0AiW`). Once the authentication completes successfully, the user `lsa_passthrough_user` changes into `alice` on the RFS node.
-
-{% code overflow="wrap" %}
-```bash
-admin@cfs% set devices authgroups group rfs-east default-map remote-name lsa_passthrough_user remote-password ahVaesai8Ahn0AiW passthrough
-admin@cfs% set devices device rfs1 authgroup rfs-east
-admin@cfs% set devices device rfs2 authgroup rfs-east
-admin@cfs% commit
-```
-{% endcode %}
-
-On the RFS node, configure the mapping of permitted users in the `/cluster/global-settings/passthrough/permit` list. The key of the permit list specifies which user may change into a different user. The possible users to change into are specified by the `as-user` leaf-list, and the `as-group` leaf-list specifies valid groups. The user will end up with the intersection of the groups in the user session on the CFS and the groups specified by the `as-group` leaf-list. Only users in the permit list are allowed to change into the users listed in that entry's `as-user` leaf-list.
-
-{% code overflow="wrap" %}
-```bash
-admin@rfs1% set cluster global-settings passthrough permit lsa_passthrough_user as-user [ alice bob carol ] as-group [ oper dev ]
-admin@rfs1% commit
-```
-{% endcode %}
-
-To allow the passthrough user to change into any user, set the `as-any-user` leaf; for any group, set the `as-any-group` leaf. Use this with care, as setting these leafs allows the `lsa_passthrough_user` to elevate privileges by changing into `user admin` / `group admin`.
-
-{% code overflow="wrap" %}
-```bash
-admin@rfs1% set cluster global-settings passthrough permit lsa_passthrough_user as-any-user as-any-group
-admin@rfs1% commit
-```
-{% endcode %}
diff --git a/administration/advanced-topics/locks.md b/administration/advanced-topics/locks.md
deleted file mode 100644
index 41d86cfc..00000000
--- a/administration/advanced-topics/locks.md
+++ /dev/null
@@ -1,73 +0,0 @@
----
-description: Learn about different transaction locks in NSO and their interactions.
----
-
-# Locks
-
-This section explains the different locks that exist in NSO and how they interact. It is important to understand the architecture of NSO with its management backplane and the transaction state machine as described in [Package Development](../../development/advanced-development/developing-packages.md) to be able to understand how the different locks fit into the picture.
-
-## Global Locks
-
-The NSO management backplane keeps a lock on the datastore running. This lock is usually referred to as the global lock, and it provides a mechanism to grant exclusive access to the datastore.
-
-The global lock is the only lock that can be explicitly taken through a northbound agent, for example, by the NETCONF `<lock>` operation, or by calling `Maapi.lock()`.
-
-A global lock can be taken for the whole datastore, or it can be a partial lock (for a subset of the data model). Partial locks are exposed through NETCONF and MAAPI and are only supported for operations toward the running datastore.
-
-An agent can request a global lock to ensure that it has exclusive write access. When a global lock is held by an agent, it is not possible for anyone else to write to the datastore that the lock guards—this is enforced by the transaction engine. A global lock on running is granted to an agent if there are no other holders of it (including partial locks) and if all data providers approve the lock request. Each data provider (CDB and/or external data providers) will have its `lock()` callback invoked to get a chance to refuse or accept the lock. The output of `ncs --status` includes the locking status: for each user session, any locks held per datastore are listed.
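-
-For a quick look at lock status from the shell, something like the following can be used (a sketch; the exact output format varies between versions):
-
-```bash
-ncs --status | grep -i lock
-```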
-
-## Transaction Locks
-
-A northbound agent starts a user session towards NSO's management backplane. Each user session can then start multiple transactions. A transaction is either read/write or read-only.
-
-The transaction engine has its internal locks towards the running datastore. These transaction locks exist to serialize configuration updates towards the datastore and are separate from the global locks.
-
-As a northbound agent wants to update the running datastore with a new configuration, it will implicitly grab and release the transactional lock. The transaction engine takes care of managing the locks as it moves through the transaction state machine, and there is no API that exposes the transactional locks to the northbound agents.
-
-When the transaction engine wants to take a lock for a transaction (for example, when entering the validate state), it first checks that no other transaction has the lock. Then it checks that no user session has a global lock on that datastore. Finally, each data provider is invoked by its `transLock()` callback.
-
-## Northbound Agents and Global Locks
-
-In contrast to the implicit transactional locks, some northbound agents expose explicit access to the global locks. This is done a bit differently by each agent.
-
-The management API exposes the global locks by providing the `Maapi.lock()` and `Maapi.unlock()` methods (and the corresponding `Maapi.lockPartial()` and `Maapi.unlockPartial()` for partial locking). Once a user session is established (or attached to), these functions can be called.
-
-In the CLI, the global locks are taken when entering different configure modes as follows:
-
-* `config exclusive`: The running datastore global lock will be taken.
-* `config terminal`: Does not grab any locks.
-
-The global lock is then kept by the CLI until the configure mode is exited.
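-
-For example, in the C-style CLI (a sketch; prompts and output may vary by version):
-
-```bash
-admin@ncs# config exclusive
-Entering configuration mode exclusive
-admin@ncs(config)#
-```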
-
-The Web UI behaves in the same way as the CLI: it presents edit tabs (such as **Edit private** and **Edit exclusive**) that correspond to the CLI modes described above.
-
-The NETCONF agent translates the `<lock>` operation into a request for the global lock for the requested datastore. Partial locks are also exposed through the `<partial-lock>` RPC.
-
-## External Data Providers
-
-Implementing the `lock()` and `unlock()` callbacks is not required of an external data provider. NSO will never try to initiate the `transLock()` state transition (see the transaction state diagram in [Package Development](../../development/advanced-development/developing-packages.md)) towards a data provider while a global lock is taken—so the reason for a data provider to implement the locking callbacks is if someone else can write (or lock, for example, to take a backup) to the data provider's database.
-
-## CDB and Locks
-
-CDB ignores the `lock()` and `unlock()` callbacks (since the data-provider interface is the only write interface towards it).
-
-CDB has its own internal locks on the database. The running datastore has a single write and multiple read locks. It is not possible to grab the write lock on a datastore while there are active read locks on it. The locks in CDB exist to make sure that a reader always gets a consistent view of the data (in particular, it becomes very confusing if another user is able to delete configuration nodes in between calls to `getNext()` on YANG list entries).
-
-During a transaction, `transLock()` takes a CDB read lock towards the transaction's datastore, and `writeStart()` tries to release the read lock and grab the write lock instead.
-
-A CDB external reader client implicitly takes a CDB read lock between `Cdb.startSession()` and `Cdb.endSession()`. This means that while a CDB client is reading, a transaction cannot pass through `writeStart()` (and conversely, a CDB reader cannot start while a transaction is between `writeStart()` and `commit()` or `abort()`).
-
-The operational store in CDB does not have any locks. NSO's transaction engine can only read from it, and the CDB client writes are atomic per write operation.
-
-## Lock Impact on User Sessions
-
-When a session tries to modify a data store that is locked in some way, it will fail. For example, the CLI might print:
-
-```bash
-admin@ncs(config)# commit
-Aborted: the configuration database is locked
-```
-
-Since some of the locks are short-lived (such as a CDB read lock), NSO is by default configured to retry the failing operation for a short period of time. If the data store is still locked after this time, the operation fails.
-
-To configure this, set `/ncs-config/commit-retry-timeout` in `ncs.conf`.
diff --git a/administration/advanced-topics/restart-strategies-for-service-manager.md b/administration/advanced-topics/restart-strategies-for-service-manager.md
deleted file mode 100644
index d80a2ac8..00000000
--- a/administration/advanced-topics/restart-strategies-for-service-manager.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-description: Restart strategy for the service manager.
----
-
-# Service Manager Restart
-
-The service manager executes in a Java VM outside of NSO. The `NcsMux` initializes a number of sockets to NSO at startup. These are Maapi sockets and data provider sockets. NSO can choose to close any of these sockets whenever it has requested the service manager to perform a task and that task is not finished within the stipulated timeout. If that happens, the service manager must be restarted. The timeout(s) are controlled by several `ncs.conf` parameters found under `/ncs-config/japi`.
diff --git a/administration/get-started.md b/administration/get-started.md
deleted file mode 100644
index 0535e407..00000000
--- a/administration/get-started.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-description: Administrate and manage NSO.
-icon: chevrons-right
----
-
-# Get Started
-
-## Installation and Deployment
-
-
diff --git a/administration/installation-and-deployment/README.md b/administration/installation-and-deployment/README.md
deleted file mode 100644
index e3c2356d..00000000
--- a/administration/installation-and-deployment/README.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-description: Learn about different ways to install and deploy NSO.
-icon: download
----
-
-# Installation and Deployment
-
-## Ways to Deploy NSO
-
-* [By installation](./#by-installation)
-* [By using Cisco-provided container images](./#by-using-cisco-provided-container-images)
-
-### By Installation
-
-Choose this way if you want to install NSO on a system. Before proceeding with the installation, decide on the install type.
-
-#### Install Types
-
-The installation of NSO comes in two variants.
-
-{% hint style="info" %}
-Both variants can be installed in **standard mode** or in [**FIPS**](https://www.nist.gov/itl/publications-0/federal-information-processing-standards-fips)**-compliant** mode. See the detailed installation instructions for more information.
-{% endhint %}
-
-
-| Local Install | System Install |
-| --- | --- |
-| Local Install is used for development, lab, and evaluation purposes. It unpacks all the application components, including docs and examples. It can be used by the engineer to run multiple, unrelated, instances of NSO for different labs and demos on a single workstation. | System Install is used when installing NSO for a centralized, always-on, production-grade, system-wide deployment. It is configured as a system daemon that would start and end with the underlying operating system. The default users of admin and operator are not included and the file structure is more distributed. |
-
-{% hint style="info" %}
-All the NSO examples and README steps provided with the installation are based on and intended for Local Install only. Use Local Install for evaluation and development purposes only.
-
-System Install should be used only for production deployment. For all other purposes, use the Local Install procedure.
-{% endhint %}
-
-### By Using Cisco-Provided Container Images
-
-Choose this way if you want to run NSO in a container, such as Docker. Visit the link below for more information.
-
-{% content-ref url="containerized-nso.md" %}
-[containerized-nso.md](containerized-nso.md)
-{% endcontent-ref %}
-
-***
-
-> **Supporting Information**
->
-> If you are evaluating NSO, you should have a designated support contact. If you have an NSO support agreement, please use the support channels specified in the agreement. In either case, do not hesitate to reach out to us if you have questions or feedback.
diff --git a/administration/installation-and-deployment/containerized-nso.md b/administration/installation-and-deployment/containerized-nso.md
deleted file mode 100644
index da4045f6..00000000
--- a/administration/installation-and-deployment/containerized-nso.md
+++ /dev/null
@@ -1,870 +0,0 @@
----
-description: Deploy NSO in a containerized setup using Cisco-provided images.
----
-
-# Containerized NSO
-
-NSO can be deployed in your environment using a container, such as Docker. Cisco offers two pre-built images for this purpose that you can use to run NSO and build packages (see [Overview of NSO Images](containerized-nso.md#d5e8294)).
-
-***
-
-**Migration Information**
-
-If you are migrating from an existing NSO System Install to a container-based setup, follow the guidelines given below in [Migration to Containerized NSO](containerized-nso.md#sec.migrate-to-containerizednso).
-
-***
-
-## Use Cases for Containerized Approach
-
-Running NSO in a container offers several benefits that you would generally expect from a containerized approach, such as ease of use and convenient distribution. More specifically, a containerized NSO approach allows you to:
-
-* Run a container image of a specific version of NSO and your packages, which can then be distributed as one unit.
-* Deploy and distribute the same version across your production environment.
-* Use the Build Image containing the necessary environment for compiling NSO packages.
-
-## Overview of NSO Images
-
-Cisco provides the following two NSO images based on Red Hat UBI.
-
-* [Production Image](containerized-nso.md#production-image)
-* [Build Image](containerized-nso.md#build-image)
-
-
-| | Intended Use | NSO Install Type |
-| --- | --- | --- |
-| **Development Host** | Develop NSO Packages | None or Local Install |
-| **Build Image** | Build NSO Packages | System Install |
-| **Production Image** | Run NSO | System Install |
-
-{% hint style="info" %}
-The Red Hat UBI is an OCI-compliant image that is freely distributable and independent of platform and technical dependencies. You can read more about Red Hat UBI [here](https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image), and about Open Container Initiative (OCI) [here](https://opencontainers.org/faq/).
-{% endhint %}
-
-### Production Image
-
-The Production Image is a production-ready NSO image for system-wide deployment and use. It is based on NSO [System Install](system-install.md) and is available from the [Cisco Software Download](https://software.cisco.com/download/home) site.
-
-Use the pre-built image as the base image in the container file (e.g., Dockerfile) and mount your own packages (such as NEDs and service packages) to run a final image for your production environment (see examples below).
-
-{% hint style="info" %}
-Consult the [Installation](./) documentation for information on installing NSO on a Docker host, building NSO packages, etc.
-{% endhint %}
-
-{% hint style="info" %}
-See [Developing and Deploying a Nano Service](deployment/develop-and-deploy-a-nano-service.md) for an example that uses the container to deploy an SSH-key-provisioning nano service.
-
-The README in the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey) example provides a link to the container-based deployment variant of the example. See the `setup_ncip.sh` script and `README` in the `netsim-sshkey` deployment example for details.
-{% endhint %}
-
-### Build Image
-
-The Build Image is a separate standalone NSO image with the necessary environment and software for building packages. It is provided specifically to address the developer needs of building packages.
-
-The image is available as a signed package (e.g., `nso-VERSION.container-image-build.linux.ARCH.signed.bin`) from the Cisco [Software Download](https://software.cisco.com/download/home) site. You can run the Build Image in different ways, and a simple tool for defining and running multi-container Docker applications is [Docker Compose](https://docs.docker.com/compose/) (see examples below).
-
-The container provides the necessary environment to build custom packages. The Build Image adds a few Linux packages that are useful for development, such as Ant, JDK, net-tools, pip, etc. Additional Linux packages can be added using, for example, the `dnf` command. The `dnf list installed` command lists all the installed packages.
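-
-For example, extra build tooling can be added inside a running Build container like so (a sketch; `build-nso-pkgs` is the container name used in the Docker Compose example later in this section):
-
-```bash
-docker exec -it build-nso-pkgs dnf install -y git
-docker exec -it build-nso-pkgs dnf list installed | grep git
-```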
-
-## Downloading and Extracting the Images
-
-To fetch and extract NSO images:
-
-1. On Cisco's official [Software Download](https://software.cisco.com/download/home) site, search for "Network Services Orchestrator". Select the relevant NSO version in the drop-down list, e.g., "Crosswork Network Services Orchestrator 6", and click "Network Services Orchestrator Software". Locate the binary, which is delivered as a signed package (e.g., `nso-6.4.container-image-prod.linux.x86_64.signed.bin`).
-2. Extract the image and other files from the signed package, for example:
-
- ```bash
- sh nso-6.4.container-image-prod.linux.x86_64.signed.bin
- ```
-
-{% hint style="info" %}
-**Signed Archive File Pattern**
-
-The signed archive file name has the following pattern:
-
-`nso-VERSION.container-image-PROD_BUILD.linux.ARCH.signed.bin`, where:
-
-* `VERSION` denotes the image's NSO version.
-* `PROD_BUILD` denotes the type of the container (i.e., `prod` for Production, and `build` for Build).
-* `ARCH` is the CPU architecture.
-{% endhint %}
-
-## System Requirements
-
-To run the images, make sure that your system meets the following requirements:
-
-* A system running Linux `x86_64` or `ARM64`, or macOS `x86_64` or Apple Silicon. Use Linux for production.
-* A container platform. Docker is the recommended platform and is used as an example in this guide for running NSO images. You may use another container runtime of your choice; note, however, that the commands in this guide are Docker-specific, so make sure to use the respective commands for your runtime.
-* To check the Java (JDK) and Python versions included in the container, use the following command (where `cisco-nso-prod:6.5` is the image you want to check):
-
- {% code title="Example: Check Java and Python Versions of Container" %}
- ```bash
- docker run --rm cisco-nso-prod:6.5 sh -c "java -version && python --version"
- ```
- {% endcode %}
-
-{% hint style="info" %}
-Docker on Mac uses a Linux VM to run the Docker engine, which is compatible with the normal Docker images built for Linux. You do not need to recompile your NSO-in-Docker images when moving between a Linux machine and Docker on Mac as they both essentially run Docker on Linux.
-{% endhint %}
-
-## Administrative Information
-
-This section covers the necessary administrative information about the NSO Production Image.
-
-### Migrate to Containerized NSO Setup
-
-If you have NSO installed as a System Install, you can migrate to the Containerized NSO setup by following the instructions in this section. Migrating your Network Services Orchestrator (NSO) to a containerized setup can provide numerous benefits, including improved scalability, easier version management, and enhanced isolation of services.
-
-The migration process is designed to ensure a smooth transition from a System-Installed NSO to a container-based deployment. Detailed steps guide you through preparing your existing environment, exporting the necessary configurations and state data, and importing them into your new containerized NSO instance. During the migration, consider the container runtime you plan to use, as this impacts the migration process.
-
-**Before You Start**
-
-* We recommend reading through this guide to understand better the expectations, requirements, and functioning aspects of a containerized deployment.
-* Verify the compatibility of your current system configurations with the containerized NSO setup. See [System Requirements](containerized-nso.md#sec.system-reqs) for more information.
-* Note that [NSO runs from a non-root user](containerized-nso.md#nso-runs-from-a-non-root-user) with the containerized NSO setup.
-* Determine and install the container orchestration tool you plan to use (e.g., Docker, etc.).
-* Ensure that your current NSO installation is fully operational and backed up and that you have a clear rollback strategy in case any issues arise. Pay special attention to customizations and integrations that your current NSO setup might have, and verify their compatibility with the containerized version of NSO.
-* Have a contingency plan in place for quick recovery in case any issues are encountered during migration.
-
-**Migration Steps**
-
-Prepare:
-
-1. Document your current NSO environment's specifics, including custom configurations and packages.
-2. Perform a complete backup of your existing NSO instance, including configurations, packages, and data.
-3. Set up the container environment and download/extract the NSO production image. See [Downloading and Extracting the Images](containerized-nso.md#sec.fetch-images) for details.
-
-Migrate:
-
-1. Stop the current NSO instance.
-2. Save the run directory from the NSO instance in an appropriate place.
-3. Use the same `ncs.conf` and High Availability (HA) setup previously used with your System Install. We assume that the `ncs.conf` follows the best practice and uses the `NCS_DIR`, `NCS_RUN_DIR`, `NCS_CONFIG_DIR`, and `NCS_LOG_DIR` variables for all paths. The `ncs.conf` can be added to a volume and mounted to `/nso/etc` in the container.
-
- ```bash
- docker container create --name temp -v NSO-evol:/nso/etc hello-world
- docker cp ncs.conf temp:/nso/etc
- docker rm temp
- ```
-4. Add the run directory as a volume, mounted to `/nso/run` in the container and copy the CDB data, packages, etc., from the previous System Install instance.
-
- ```bash
- cd path-to-previous-run-dir
- docker container create --name temp -v NSO-rvol:/nso/run hello-world
- docker cp . temp:/nso/run
- docker rm temp
- ```
-5. Create a volume for the log directory.
-
- ```bash
- docker volume create --name NSO-lvol
- ```
-6. Start the container. Example:
-
- ```bash
- docker run -v NSO-rvol:/nso/run -v NSO-evol:/nso/etc -v NSO-lvol:/log -itd \
- --name cisco-nso -e EXTRA_ARGS=--with-package-reload -e ADMIN_USERNAME=admin \
- -e ADMIN_PASSWORD=admin cisco-nso-prod:6.4
- ```
-
-Finalize:
-
-1. Ensure that the containerized NSO instance functions as expected and validate system operations.
-2. Plan and execute your cutover transition from the System-Installed NSO to the containerized version with minimal disruption.
-3. Monitor the new setup thoroughly to ensure stability and performance.
-
-### `ncs.conf` File Configuration and Preference
-
-The `run-nso.sh` script runs a check at startup to determine which `ncs.conf` file to use. The order of preference is as below:
-
-1. The `ncs.conf` file specified in the Dockerfile (i.e., `ENV $NCS_CONFIG_DIR /etc/ncs/`) is used as the first preference.
-2. The second preference is to use the `ncs.conf` file mounted in the `/nso/etc/` run directory.
-3. If no `ncs.conf` file is found at either `/etc/ncs` or `/nso/etc`, the default `ncs.conf` file provided with the NSO image in `/defaults` is used.
-
-{% hint style="info" %}
-If the `ncs.conf` file is edited after startup, it can be reloaded using MAAPI `reload_config()`. Example: `$ ncs_cmd -c "reload"`.
-{% endhint %}
-
-{% hint style="info" %}
-The default `ncs.conf` file in `/defaults` has a set of environment variables that can be used to enable interfaces (all interfaces are disabled by default), which is useful when spinning up the Production container for quick testing. An interface can be enabled by setting the corresponding environment variable to `true`.
-
-* `NCS_CLI_SSH`: Enables CLI over SSH on port `2024`.
-* `NCS_WEBUI_TRANSPORT_TCP`: Enables JSON-RPC and RESTCONF over TCP on port `8080`.
-* `NCS_WEBUI_TRANSPORT_SSL`: Enables JSON-RPC and RESTCONF over SSL/TLS on port `8888`.
-* `NCS_NETCONF_TRANSPORT_SSH`: Enables NETCONF over SSH on port `2022`.
-* `NCS_NETCONF_TRANSPORT_TCP`: Enables NETCONF over TCP on port `2023`.
-{% endhint %}
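-
-For example, a throwaway container for quick testing with the CLI over SSH enabled via the default `ncs.conf` could be started like this (a sketch; not for production use):
-
-```bash
-docker run -itd --name cisco-nso -e NCS_CLI_SSH=true -p 2024:2024 \
--e ADMIN_USERNAME=admin -e ADMIN_PASSWORD=admin cisco-nso-prod:6.4
-```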
-
-### Pre- and Post-Start Scripts
-
-If you need to perform operations before or after the `ncs` process is started in the Production container, you can use Python and/or Bash scripts to achieve this. Add the scripts to the `$NCS_CONFIG_DIR/pre-ncs-start.d/` and `$NCS_CONFIG_DIR/post-ncs-start.d/` directories to have the `run-nso.sh` script run them.
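-
-As an illustration, a minimal post-start script (the file name and log path below are hypothetical) could look like this:
-
-```bash
-#!/bin/bash
-# Hypothetical $NCS_CONFIG_DIR/post-ncs-start.d/99-log-start.sh:
-# wait until NSO answers IPC requests, then record the startup time.
-ncs_cmd -c "wait-start 2"
-echo "NSO started at $(date)" >> /log/post-ncs-start.log
-```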
-
-### NSO Runs from a Non-Root User
-
-NSO is installed with the `--run-as-user` option for build and production containers to run NSO from the non-root `nso` user that belongs to the `nso` user group.
-
-When migrating from container versions where NSO has `root` privilege, ensure the `nso` user owns or has access rights to the required files and directories. Examples include application directories, SSH host keys, SSH keys used to authenticate with devices, etc. See the deployment example variant referenced by the [examples.ncs/getting-started/netsim-sshkey/README.md](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey) for an example.
-
-The NSO container runs a script called `take-ownership.sh` as part of its startup, which takes ownership of all the directories that NSO needs. The script will be one of the first things to run. The script can be overridden to take ownership of even more directories, such as mounted volumes or bind mounts.
-
-### Admin User Creation
-
-An admin user can be created on startup by the run script in the container. Three environment variables control the addition of an admin user:
-
-* `ADMIN_USERNAME`: Username of the admin user to add, default is `admin`.
-* `ADMIN_PASSWORD`: Password of the admin user to add.
-* `ADMIN_SSHKEY`: Private SSH key of the admin user to add.
-
-As `ADMIN_USERNAME` already has a default value, only `ADMIN_PASSWORD`, or `ADMIN_SSHKEY` need to be set in order to create an admin user. For example:
-
-```bash
-docker run -itd --name cisco-nso -e ADMIN_PASSWORD=admin cisco-nso-prod:6.4
-```
-
-This can be useful when starting up a container in CI for testing or development purposes. It is typically not required in a production environment where CDB already contains the required user accounts.
-
-{% hint style="info" %}
-When using a permanent volume for CDB, and restarting the NSO container multiple times with a different `ADMIN_USERNAME` or `ADMIN_PASSWORD`, the start script uses these environment variables to generate an XML file named `add_admin_user.xml`. The generated XML file is added to the CDB directory to be read at startup. But if the persisted CDB configuration file already exists in the CDB directory, NSO will not load any XML files at startup; instead, the generated `add_admin_user.xml` in the CDB directory needs to be loaded manually.
-{% endhint %}
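-
-Assuming the default CDB directory, the generated file could then be loaded manually with `ncs_load`, for example:
-
-```bash
-docker exec -it cisco-nso ncs_load -l -m /nso/run/cdb/add_admin_user.xml
-```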
-
-{% hint style="info" %}
-The default `ncs.conf` file performs authentication using only the Linux PAM, with local authentication disabled. For the `ADMIN_USERNAME`, `ADMIN_PASSWORD`, and `ADMIN_SSHKEY` variables to take effect, NSO's local authentication, in `/ncs-conf/aaa/local-authentication`, needs to be enabled. Alternatively, you can create a local Linux admin user that is authenticated by NSO using Linux PAM.
-{% endhint %}
-
-### Exposing Ports
-
-The default `ncs.conf` NSO configuration file does not enable any northbound interfaces, and no ports are exposed externally to the container. Ports can be exposed externally of the container when starting the container with the northbound interfaces and their ports correspondingly enabled in `ncs.conf`.
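-
-For example, if NETCONF over SSH (port `2022`) and the web UI over SSL (port `8888`) have been enabled in `ncs.conf`, the corresponding ports can be published like this (a sketch):
-
-```bash
-docker run -itd --name cisco-nso -p 2022:2022 -p 8888:8888 \
--v NSO-vol:/nso -v NSO-log-vol:/log cisco-nso-prod:6.4
-```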
-
-### Backup and Restore
-
-The backup behavior of running NSO inside vs. outside the container is largely the same, except that when running NSO in a container, the SSH and SSL certificates are not included in the backup produced by the `ncs-backup` script. This differs from running NSO outside a container, where the default configuration path `/etc/ncs` is used to store the SSH and SSL certificates, i.e., `/etc/ncs/ssh` for SSH and `/etc/ncs/ssl` for SSL.
-
-**Take a Backup**
-
-Let's assume we start a production image container using:
-
-```bash
-docker run -d --name cisco-nso -v NSO-vol:/nso -v NSO-log-vol:/log cisco-nso-prod:6.4
-```
-
-To take a backup:
-
-* Run the `ncs-backup` command. The backup file is written to `/nso/run/backups`.
-
- ```bash
- docker exec -it cisco-nso ncs-backup
- INFO Backup /nso/run/backups/ncs-6.4@2024-11-03T11:31:07.backup.gz created successfully
- ```
-
-**Restore a Backup**
-
-To restore a backup, NSO must be stopped. As you likely only have access to the `ncs-backup` tool and the volume containing CDB and other run-time data from inside the NSO container, this poses a slight challenge. Additionally, shutting down NSO will terminate the NSO container.
-
-To restore a backup:
-
-1. Shut down the NSO container:
-
- ```bash
- docker stop cisco-nso
- docker rm cisco-nso
- ```
-2. Run the `ncs-backup --restore` command. Start a new container with the same persistent shared volumes mounted but with a different command: instead of running `/run-nso.sh`, the normal command of the NSO container, run the restore command.
-
- ```bash
- docker run -u root -it --rm -v NSO-vol:/nso -v NSO-log-vol:/log \
- --entrypoint ncs-backup cisco-nso-prod:6.4 \
- --restore /nso/run/backups/ncs-6.4@2024-11-03T11:31:07.backup.gz
-
- Restore /etc/ncs from the backup (y/n)? y
- Restore /nso/run from the backup (y/n)? y
- INFO Restore completed successfully
- ```
-3. Restoring an NSO backup moves the current run directory (`/nso/run`) to `/nso/run.old` and restores the run directory from the backup to the main run directory (`/nso/run`). After this is done, start the regular NSO container again as usual.
-
- ```bash
- docker run -d --name cisco-nso -v NSO-vol:/nso -v NSO-log-vol:/log cisco-nso-prod:6.4
- ```
-
-### SSH Host Key
-
-The NSO image `/run-nso.sh` script looks for an SSH host key named `ssh_host_ed25519_key` in the `/nso/etc/ssh` directory to be used by the NSO built-in SSH server for the CLI and NETCONF interfaces.
-
-If an SSH host key exists, which is for a typical production setup stored in a persistent shared volume, it remains the same after restarts or upgrades of NSO. If no SSH host key exists, the script generates a private and public key.
-
-In a high-availability (HA) setup, the host key is typically shared by all NSO nodes in the HA group and stored in a persistent shared volume. This is done to avoid fetching the public host key from the new primary after each failover.
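-
-To pre-generate the host key on the persistent volume instead of relying on the auto-generated one, something like the following can be used (a sketch, run wherever the volume is accessible):
-
-```bash
-ssh-keygen -t ed25519 -N "" -f /nso/etc/ssh/ssh_host_ed25519_key
-```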
-
-### HTTPS TLS Certificate
-
-NSO expects to find a TLS certificate and key at `/nso/ssl/cert/host.cert` and `/nso/ssl/cert/host.key` respectively. Since the `/nso` path is usually on persistent shared volume for production setups, the certificate remains the same across restarts or upgrades.
-
-If no certificate is present, a self-signed certificate valid for 30 days is generated, which makes it usable in development and staging environments. It is not meant for production: replace it with a properly signed certificate, and consider doing so even for test and staging environments. Simply generate one and place it at the path above, for example with the following command (the same one used to generate the temporary self-signed certificate):
-
-```
-openssl req -new -newkey rsa:4096 -x509 -sha256 -days 30 -nodes \
--out /nso/ssl/cert/host.cert -keyout /nso/ssl/cert/host.key \
--subj "/C=SE/ST=NA/L=/O=NSO/OU=WebUI/CN=Mr. Self-Signed"
-```
-
-### YANG Model Changes (destructive)
-
-The database in NSO, called CDB, uses YANG models as the schema for the database. It is only possible to store data in CDB according to the YANG models that define the schema.
-
-If the YANG models are changed, particularly if nodes are removed or renamed (a rename is the removal of one leaf and the addition of another), any data in CDB for those leaves will also be removed. NSO normally warns about this when you attempt to load new packages; for example, the `request packages reload` command refuses to reload the packages if nodes in the YANG model have disappeared. You would then have to add the `force` argument, e.g., `request packages reload force`.
-
-### Health Check
-
-The base Production Image comes with a basic container health check. It uses `ncs_cmd` to get the state that NCS is currently in. Only the result status is observed to check if `ncs_cmd` was able to communicate with the `ncs` process. The result indicates if the `ncs` process is responding to IPC requests.
-
-{% hint style="info" %}
-The default `--health-start-period duration` in the health check is set to 60 seconds. NSO will report an `unhealthy` state if it takes more than 60 seconds to start up. To resolve this, set the `--health-start-period duration` value to a higher value, such as 600 seconds, or however long you expect NSO to take to start up.
-
-To disable the health check, use the `--no-healthcheck` command.
-{% endhint %}
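-
-For example, a deployment that is expected to take up to ten minutes to start could be run with a longer start period (a sketch):
-
-```bash
-docker run -itd --name cisco-nso --health-start-period 600s \
--v NSO-vol:/nso -v NSO-log-vol:/log cisco-nso-prod:6.4
-```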
-
-### NSO System Dump and Strict Overcommit Accounting on the Host
-
-By default, the Linux kernel allows overcommit of memory. However, memory overcommit produces an unexpected and unreliable environment for NSO since the Linux Out‑Of‑Memory (OOM) killer may terminate NSO without restarting it if the system is critically low on memory.
-
-Also, when the OOM-killer terminates NSO, NSO will not produce a system dump file, and the debug information will be lost. It is therefore strongly recommended to disable memory overcommit on Linux hosts running NSO production containers, with an overcommit ratio of less than 100%. Use a 5% headroom (`vm.overcommit_ratio` ≈ 95 when no swap is configured), or more if the host runs additional services; alternatively, use `vm.overcommit_kbytes` to set a fixed CommitLimit.
-
-See [Step - 4. Run the Installer](system-install.md#si.run.the.installer) in System Install for information on memory overcommit recommendations for a Linux system hosting NSO production containers.
-
-{% hint style="info" %}
-By default, NSO writes a system dump to the NSO run-time directory, default `NCS_RUN_DIR=/nso/run`. Set the `NCS_DUMP="/path/to/mounted/dir/ncs_crash.dump.$(date +%Y%m%d-%H%M%S)"` variable if `NCS_RUN_DIR` does not point to a persistent, host-mounted volume (so that dumps survive container restarts) or to give the NSO system dump file a unique name.
-{% endhint %}
-
-#### Recommended: Host Configured for Strict Overcommit
-
-With the host configured for strict overcommit (`vm.overcommit_memory=2`), containers inherit the host’s CommitLimit behavior. Note that `vm.overcommit_memory`, `vm.overcommit_ratio`, and `vm.overcommit_kbytes` are host-global: they are configured on the host, apply to all containers, and cannot be set per container.
-
-* Optionally use the `docker run` command to set memory limits and swap:
- * Use `--memory=` to cap the container’s RAM.
- * Set `--memory-swap=` equal to `--memory` to effectively disable swap for the container.
- * If swap must be enabled, use a fast disk, for example, an NVMe SSD.
-
-#### **Alternative: Heuristic Overcommit Mode**
-
-The alternative, using heuristic overcommit mode, can be useful if the NSO host has severe memory limitations: for example, if RAM sizing for the NSO host did not take into account that the schema (from YANG models) is loaded into memory by NSO Python and Java packages, which affects total committed memory (Committed\_AS), and after considering the recommendations in [CDB Stores the YANG Model Schema](../../development/advanced-development/scaling-and-performance-optimization.md#d5e8743).
-
-As an alternative to the recommended strict mode, `vm.overcommit_memory=2`, you can keep `vm.overcommit_memory=0` configured on the host to allow overcommit of memory and trigger `ncs --debug-dump` when Committed\_AS reaches, for example, 95% of CommitLimit or when the container’s cgroup memory usage reaches, for example, 90% of its cap.
-
-* This approach does not prevent the Linux OOM-killer from killing NSO or the container; it only attempts to capture diagnostic data before memory pressure becomes critical. OOM kills can occur even when Committed\_AS < CommitLimit due to cgroup limits or reclaim failure.
-* The same `docker run` memory and swap options as above can be used.
-* Monitor the Committed\_AS vs CommitLimit and cgroup memory usage vs cap using, for example, a script or an observability tool.
- * Note that Committed\_AS and CommitLimit from `/proc/meminfo` are host‑wide values. Inside a container, they reflect the host, not the container’s cgroup budget.
- * cgroup memory.current vs memory.max is the primary predictor for container OOM events; the host CommitLimit is an additional early‑warning signal.
-* Ensure the user running the monitor has permission to execute `ncs --debug-dump` and write to the chosen dump directory.
-
-{% code title="Simple example of an NSO debug-dump monitor inside a container" overflow="wrap" %}
-```bash
-#!/usr/bin/env bash
-# Simple NSO debug-dump monitor inside a container (vm.overcommit_memory=0 on host).
-# Triggers ncs --debug-dump when Committed_AS reaches 95% of CommitLimit
-# or when the container’s cgroup memory usage reaches 90% of its cap.
-
-THRESHOLD_PCT=95 # CommitLimit threshold (5% headroom).
-CGROUP_THRESHOLD_PCT=90 # Trigger when memory.current >= 90% of memory.max.
-POLL_INTERVAL=5 # Seconds between checks.
-PROCESS_CHECK_INTERVAL=30
-DUMP_COUNT=10
-DUMP_DELAY=10
-DUMP_PREFIX="dump"
-
-command -v ncs >/dev/null 2>&1 || { echo "ncs command not found in PATH."; exit 1; }
-
-find_nso_pid() {
- pgrep -x ncs.smp | head -n1 || true
-}
-
-read_cgroup_mem_kb() {
- # Outputs: current_kb max_kb (max_kb=0 if unlimited or not found)
- if [ -r /sys/fs/cgroup/memory.current ]; then
- local cur max
- cur=$(cat /sys/fs/cgroup/memory.current 2>/dev/null)
- max=$(cat /sys/fs/cgroup/memory.max 2>/dev/null)
- [ "$max" = "max" ] && max=0
- echo "$((cur/1024)) $((max/1024))"
- else
- echo "0 0"
- fi
-}
-
-while true; do
- pid="$(find_nso_pid)"
- if [ -z "${pid:-}" ]; then
- echo "NSO not running; retry in ${PROCESS_CHECK_INTERVAL}s..."
- sleep "$PROCESS_CHECK_INTERVAL"
- continue
- fi
-
- committed="$(awk '/Committed_AS:/ {print $2}' /proc/meminfo)"
- commit_limit="$(awk '/CommitLimit:/ {print $2}' /proc/meminfo)"
- if [ -z "$committed" ] || [ -z "$commit_limit" ]; then
- echo "Unable to read /proc/meminfo; retry in ${POLL_INTERVAL}s..."
- sleep "$POLL_INTERVAL"
- continue
- fi
-
- threshold=$(( commit_limit * THRESHOLD_PCT / 100 ))
- read cg_current_kb cg_max_kb < <(read_cgroup_mem_kb)
- cgroup_trigger=0
- if [ "${cg_max_kb:-0}" -gt 0 ]; then
- cgroup_pct=$(( cg_current_kb * 100 / cg_max_kb ))
- [ "$cgroup_pct" -ge "$CGROUP_THRESHOLD_PCT" ] && cgroup_trigger=1
- echo "PID=${pid} Committed_AS=${committed}kB; CommitLimit=${commit_limit}kB; Threshold=${threshold}kB; cgroup=${cg_current_kb}kB/${cg_max_kb}kB (${cgroup_pct}%)."
- else
- echo "PID=${pid} Committed_AS=${committed}kB; CommitLimit=${commit_limit}kB; Threshold=${threshold}kB; cgroup=unlimited."
- fi
-
- if [ "$committed" -ge "$threshold" ] || [ "$cgroup_trigger" -eq 1 ]; then
- echo "Threshold crossed; collecting ${DUMP_COUNT} debug dumps..."
- for i in $(seq 1 "$DUMP_COUNT"); do
- file="${DUMP_PREFIX}.${i}.bin"
- echo "Dump $i -> ${file}"
- if ! ncs --debug-dump "$file"; then
- echo "Debug dump $i failed."
- fi
- sleep "$DUMP_DELAY"
- done
- echo "All debug dumps completed; exiting."
- exit 0
- fi
-
- sleep "$POLL_INTERVAL"
-done
-```
-{% endcode %}
-
-### Startup Arguments
-
-The `/run-nso.sh` script that starts NSO is executed as an `ENTRYPOINT` instruction, and the `CMD` instruction can be used to provide arguments to the entrypoint script. Another alternative is to use the `EXTRA_ARGS` variable to provide arguments. The `/run-nso.sh` script checks the `EXTRA_ARGS` variable before the `CMD` instruction.
-
-An example using `docker run` with the `CMD` instruction:
-
-```bash
-docker run --name nso -itd cisco-nso-prod:6.4 --with-package-reload \
---ignore-initial-validation
-```
-
-With the `EXTRA_ARGS` variable:
-
-```bash
-docker run --name nso \
--e EXTRA_ARGS='--with-package-reload --ignore-initial-validation' \
--itd cisco-nso-prod:6.4
-```
-
-An example using a Docker Compose file, `compose.yaml`, with the `CMD` instruction:
-
-```
-services:
- nso:
- image: cisco-nso-prod:6.4
- container_name: nso
- command:
- - --with-package-reload
- - --ignore-initial-validation
-```
-
-With the `EXTRA_ARGS` variable:
-
-```
-services:
- nso:
- image: cisco-nso-prod:6.4
- container_name: nso
- environment:
- - EXTRA_ARGS=--with-package-reload --ignore-initial-validation
-```
-
-## Examples
-
-This section provides examples to exhibit the use of NSO images.
-
-### Running the Production Image using Docker CLI
-
-This example shows how to run the standalone NSO Production Image using the Docker CLI.
-
-The instructions and CLI examples used in this example are Docker-specific. If you are using a non-Docker container runtime, you will need to: fetch the NSO image from the Cisco software download site, then load and run the image with packages and networking, and finally log in to NSO CLI to run commands.
-
-If you intend to run multiple images (i.e., both Production and Build), Docker Compose is a tool that simplifies defining and running multi-container Docker applications. See the example ([Running the NSO Images using Docker Compose](containerized-nso.md#sec.example-docker-compose)) below for detailed instructions.
-
-**Steps**
-
-Follow the steps below to run the Production Image using Docker CLI:
-
-1. Start your container engine.
-2. Next, load the image and run it. Navigate to the directory where you extracted the base image and load it. This will restore the image and its tag:
-
-```bash
-docker load -i nso-6.4.container-image-prod.linux.x86_64.tar.gz
-```
-
-3. Start a container from the image. Supply additional arguments to mount the packages and `ncs.conf` as separate volumes ([`-v` flag](https://docs.docker.com/engine/reference/commandline/run/)), and publish ports for networking ([`-p` flag](https://docs.docker.com/engine/reference/commandline/run/)) as needed. The container starts NSO using the `/run-nso.sh` script. To understand how the `ncs.conf` file is used, see [`ncs.conf` File Configuration and Preference](containerized-nso.md#ug.admin_guide.containers.ncs).
-
-```bash
-docker run -itd --name cisco-nso \
--v NSO-vol:/nso \
--v NSO-log-vol:/log \
---net=host \
--e ADMIN_USERNAME=admin \
--e ADMIN_PASSWORD=admin \
-cisco-nso-prod:6.4
-```
-
-{% hint style="warning" %}
-**Overriding Environment Variables**
-
-Overriding basic environment variables (`NCS_CONFIG_DIR`, `NCS_LOG_DIR`, `NCS_RUN_DIR`, etc.) is not supported and therefore should be avoided. Using, for example, the `NCS_CONFIG_DIR` environment variable to mount a configuration directory will result in an error. Instead, to mount your configuration directory, do it appropriately in the correct place, which is under `/nso/etc`.
-{% endhint %}
-
-
-**Examples: Running the Image with and without Named Volumes**
-
-The following examples show how to run the image with and without named volumes.
-
-**Running without a named volume**: This is the minimal way of running the image but does not provide any persistence when the container is destroyed.
-
-```bash
-docker run -itd --name cisco-nso \
--p 8888:8888 \
--e ADMIN_USERNAME=admin \
--e ADMIN_PASSWORD=admin \
-cisco-nso-prod
-```
-
-**Running with a single named volume**: This way provides persistence for the NSO mount point with a `NSO-vol` volume. Logs, however, are not persistent.
-
-```bash
-docker run -itd --name cisco-nso \
--v NSO-vol:/nso \
--p 8888:8888 \
--e ADMIN_USERNAME=admin \
--e ADMIN_PASSWORD=admin \
-cisco-nso-prod
-```
-
-**Running with two named volumes**: This way provides full persistence for both the NSO and the log mount points.
-
-```bash
-docker run -itd --name cisco-nso \
--v NSO-vol:/nso \
--v NSO-log-vol:/log \
--p 8888:8888 \
--e ADMIN_USERNAME=admin \
--e ADMIN_PASSWORD=admin \
-cisco-nso-prod
-```
-
-
-
-{% hint style="info" %}
-**Loading the Packages**
-
-* Loading the packages by mounting the default load path `/nso/run` as a volume is preferred. You can also load the packages by copying them manually into the `/nso/run/packages` directory in the container. During development, a bind mount of the package directory on the host machine makes it easy to update packages in NSO by simply changing the packages on the host.
-* The default load path is configured in the `ncs.conf` file as `$NCS_RUN_DIR/packages`, where `$NCS_RUN_DIR` expands to `/nso/run` in the container. To find the load path, check the `ncs.conf` file in the `/etc/ncs/` directory.
-
- ```xml
-  <load-path>
-    <dir>${NCS_RUN_DIR}/packages</dir>
-    <dir>${NCS_DIR}/etc/ncs</dir>
-    ...
-  </load-path>
- ```
-{% endhint %}
-
-{% hint style="info" %}
-**Logging**
-
-* With the Production Image, use a shared volume to persist data across restarts. If remote (Syslog) logging is used, there is little need to persist logs. If local logging is used, then persistent logging is recommended.
-* NSO starts a cron job to handle logrotate of NSO logs by default; i.e., the `CRON_ENABLE` and `LOGROTATE_ENABLE` variables are set to `true`, using the `/etc/logrotate.conf` configuration. See the `/etc/ncs/post-ncs-start.d/10-cron-logrotate.sh` script. To set how often the cron job runs, use the crontab file.
-{% endhint %}
-
-4. Finally, log in to NSO CLI to run commands. Open an interactive shell on the running container and access the NSO CLI.
-
-```bash
-docker exec -it cisco-nso bash
-# ncs_cli -u admin
-admin@ncs>
-```
-
-You can also use the `docker exec -it cisco-nso ncs_cli -u admin` command to access the CLI from the host's terminal.
-
-### Upgrading NSO using Docker CLI
-
-This example describes how to upgrade your NSO to run a newer NSO version in the container. The overall upgrade process is outlined in the steps below. In the example below, NSO is to be upgraded from version 6.3 to 6.4.
-
-To upgrade your NSO version:
-
-1. Start a container with the `docker run` command. In the example below, it mounts the `/nso` directory in the container to the `NSO-vol` named volume to persist the data. Another option is using a bind mount of the directory on the host machine. At this point, the `/cdb` directory is empty.
-
- ```bash
-    docker run -itd --name cisco-nso -v NSO-vol:/nso cisco-nso-prod:6.3
- ```
-2. Perform a backup, either by running the `docker exec` command (make sure that the backup is placed somewhere you have mounted) or by creating a tarball of `/data/nso` on the host machine.
-
- ```bash
- docker exec -it cisco-nso ncs-backup
- ```
-3. Stop NSO by issuing the following command, or stop the container itself, which will run the `ncs stop` command automatically.
-
- ```bash
- docker exec -it cisco-nso ncs --stop
- ```
-4. Remove the old NSO.
-
- ```bash
- docker rm -f cisco-nso
- ```
-5. Start a new container and mount the `/nso` directory in the container to the `NSO-vol` named volume. This time the `/cdb` folder is not empty, so instead of starting a fresh NSO, an upgrade will be performed.
-
- ```bash
- docker run -itd --name cisco-nso -v NSO-vol:/nso cisco-nso-prod:6.4
- ```
-
-At this point, you only have one container that is running the desired version 6.4 and you do not need to uninstall the old NSO.
-
-### Running the NSO Images using Docker Compose
-
-This example covers the necessary information to demonstrate the use of NSO images to compile packages and run NSO. Using Docker Compose is not a requirement, but it is a simple tool for defining and running a multi-container setup where you want to run both the Production and Build images in an efficient manner.
-
-#### **Packages**
-
-The packages used in this example are taken from the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey) example:
-
-* `distkey`: A simple Python + template service package that automates the setup of SSH public key authentication between netsim (ConfD) devices and NSO using a nano service.
-* `ne`: A NETCONF NED package representing a netsim network element that implements a configuration subscriber Python application that adds or removes the configured public key, which the netsim (ConfD) network element checks when authenticating public key authentication clients.
-
-#### **`docker-compose.yaml` - Docker Compose File Example**
-
-A basic Docker Compose file is shown in the example below. It describes the containers running on a machine:
-
-* The Production container runs NSO.
-* The Build container builds the NSO packages.
-* A third `example` container runs the netsim device.
-
-Note that the packages use a shared volume in this simple example setup. In a more complex production environment, you may want to consider a dedicated redundant volume for your packages.
-
-```
- version: '1.0'
- volumes:
- NSO-1-rvol:
-
- networks:
- NSO-1-net:
-
- services:
- NSO-1:
- image: cisco-nso-prod:6.4
- container_name: nso1
- profiles:
- - prod
- environment:
- - EXTRA_ARGS=--with-package-reload
- - ADMIN_USERNAME=admin
- - ADMIN_PASSWORD=admin
- networks:
- - NSO-1-net
- ports:
- - "2024:2024"
- - "8888:8888"
- volumes:
- - type: bind
- source: /path/to/packages/NSO-1
- target: /nso/run/packages
- - type: bind
- source: /path/to/log/NSO-1
- target: /log
- - type: volume
- source: NSO-1-rvol
- target: /nso
- healthcheck:
- test: ncs_cmd -c "wait-start 2"
- interval: 5s
- retries: 5
- start_period: 10s
- timeout: 10s
-
- BUILD-NSO-PKGS:
- image: cisco-nso-build:6.4
- container_name: build-nso-pkgs
- network_mode: none
- profiles:
- - build
- volumes:
- - type: bind
- source: /path/to/packages/NSO-1
- target: /nso/run/packages
-
- EXAMPLE:
- image: cisco-nso-prod:6.4
- container_name: ex-netsim
- profiles:
- - example
- networks:
- - NSO-1-net
- healthcheck:
- test: test -f /nso-run-prod/etc/ncs.conf && ncs-netsim --dir /netsim is-alive ex0
- interval: 5s
- retries: 5
- start_period: 10s
- timeout: 10s
- entrypoint: bash
- command: -c 'rm -rf /netsim
- && mkdir /netsim
- && ncs-netsim --dir /netsim create-network /network-element 1 ex
- && PYTHONPATH=/opt/ncs/current/src/ncs/pyapi ncs-netsim --dir
- /netsim start
- && mkdir -p /nso-run-prod/run/cdb
- && echo "
- default
- admin
- admin
- admin
- "
- > /nso-run-prod/run/cdb/init1.xml
- && ncs-netsim --dir /netsim ncs-xml-init >
- /nso-run-prod/run/cdb/init2.xml
- && sed -i.orig -e "s|127.0.0.1|ex-netsim|"
- /nso-run-prod/run/cdb/init2.xml
- && mkdir -p /nso-run-prod/etc
- && sed -i.orig -e "s||
- |" -e "//{n;s|false
- |
- true|}" defaults/ncs.conf
- && sed -i.bak -e "//{n;s|
- false|true
- |}" defaults/ncs.conf
- && sed "//{n;s|false|
- true|}" defaults/ncs.conf
- > /nso-run-prod/etc/ncs.conf
- && mv defaults/ncs.conf.orig defaults/ncs.conf
- && tail -f /dev/null'
- volumes:
- - type: bind
- source: /path/to/packages/NSO-1/ne
- target: /network-element
- - type: volume
- source: NSO-1-rvol
- target: /nso-run-prod
-```
-
-
-**Explanation of the Docker Compose File**
-
-A description of noteworthy Compose file items is given below.
-
-* **`profiles`**: Profiles can be used to group containers in a Compose file, and they work perfectly for the Production, Build, and netsim containers. By adding multiple containers on the same machine (as a developer normally would), you can easily start the Production, Build, and netsim containers using their respective profiles (`prod`, `build`, and `example`).
-* **The command used in the netsim example**: Creates a directory called `/netsim` where the netsims will be set up, then starts the netsims, followed by generating two `init.xml` files and editing the `ncs.conf` file for the Production container. Finally, it keeps the container running. A more elegant setup would require a dedicated netsim container image with a well-documented script in it.
-* **`volumes`**: The Production and Build images are configured intentionally to have the same bind mount with `/path/to/packages/NSO-1` as the source and `/nso/run/packages` as the target. The Production Image mounts both the `/log` and `/nso` directories in the container. The `/log` directory is simply a bind mount, while the `/nso` directory is an actual volume.
-
- Named volumes are recommended over bind mounts as described by the Docker Volumes documentation. The NSO `/run` directory should therefore be mounted as a named volume. However, you can make the `/run` directory a bind mount as well.
-
- The Compose file, typically named `docker-compose.yaml`, declares a volume called `NSO-1-rvol`. This is a named volume and will be created automatically by Compose. You can create this volume externally, at which point this volume must be declared as external. If the external volume doesn't exist, the container will not start.
-
- The `example` netsim container will mount the network element NED in the packages directory. This package should be compiled. Note that the `NSO-1-rvol` volume is used by the `example` container to share the generated `init.xml` and `ncs.conf` files with the NSO Production container.
-* **`healthcheck`**: The image comes with its own health check (similar to the one shown here in Compose), and this is how you configure it yourself. The health check for the netsim `example` container checks that the `ncs.conf` file has been generated and that the first netsim instance has started in the container. You could, in theory, start more netsims inside the container.
-
-
-
-#### **Steps**
-
-Follow the steps below to run the images using Docker Compose:
-
-1. Start the Build container. This starts the services in the Compose file with the profile `build`.
-
- ```bash
- docker compose --profile build up -d
- ```
-2. Copy the packages from the `netsim-sshkey` example and compile them in the NSO Build container. The easiest way to do this is by using the `docker exec` command, which gives more control over what to build and in what order. You can also do this with a script to make it easier and less verbose. Normally, you populate the package directory from the host; here, we use the packages from an example.
-
- ```bash
-    docker exec -it build-nso-pkgs sh -c 'cp -r \
-    ${NCS_DIR}/examples.ncs/getting-started/netsim-sshkey/packages ${NCS_RUN_DIR}'
-
- docker exec -it build-nso-pkgs sh -c 'for f in ${NCS_RUN_DIR}/packages/*/src; \
- do make -C "$f" all || exit 1; done'
- ```
-3. Start the netsim container. This outputs the generated `init.xml` and `ncs.conf` files to the NSO Production container. The `--wait` flag tells Compose to wait until the health check reports healthy.
-
- ```bash
- docker compose --profile example up --wait
- ```
-4. Start the NSO Production container.
-
- ```bash
- docker compose --profile prod up --wait
- ```
-
- \
- At this point, NSO is ready to run the service example to configure the netsim device(s). A bash script (`demo.sh`) that runs the above steps and showcases the `netsim-sshkey` example is given below:
-
-    ```bash
- #!/bin/bash
- set -eu # Abort the script if a command returns with a non-zero exit code or if
- # a variable name is dereferenced when the variable hasn't been set
- GREEN='\033[0;32m'
- PURPLE='\033[0;35m'
- NC='\033[0m' # No Color
-
- printf "${GREEN}##### Reset the container setup\n${NC}";
- docker compose --profile build down
- docker compose --profile example down -v
- docker compose --profile prod down -v
- rm -rf ./packages/NSO-1/* ./log/NSO-1/*
-
-    printf "${GREEN}##### Start the build container used for building the NSO NED \
-    and service packages\n${NC}"
- docker compose --profile build up -d
-
- printf "${GREEN}##### Get the packages\n${NC}"
-    printf "${PURPLE}##### NOTE: Normally you populate the package directory from the host. \
-    Here, we use packages from an NSO example\n${NC}"
-    docker exec -it build-nso-pkgs sh -c 'cp -r \
-    ${NCS_DIR}/examples.ncs/getting-started/netsim-sshkey/packages ${NCS_RUN_DIR}'
-
- printf "${GREEN}##### Build the packages\n${NC}"
- docker exec -it build-nso-pkgs sh -c 'for f in ${NCS_RUN_DIR}/packages/*/src;
- do make -C "$f" all || exit 1; done'
-
- printf "${GREEN}##### Start the simulated device container and setup the example\n${NC}"
- docker compose --profile example up --wait
-
- printf "${GREEN}##### Start the NSO prod container\n${NC}"
- docker compose --profile prod up --wait
-
- printf "${GREEN}##### Showcase the netsim-sshkey example from NSO on the prod container\n${NC}"
- if [[ $# -eq 0 ]] ; then # Ask for input only if no argument was passed to this script
- printf "${PURPLE}##### Press any key to continue or ctrl-c to exit\n${NC}"
- read -n 1 -s -r
- fi
-    docker exec -it nso1 sh -c 'sed -i.orig -e "s/make/#make/" \
-    ${NCS_DIR}/examples.ncs/getting-started/netsim-sshkey/showcase.sh'
- docker exec -it nso1 sh -c 'cd ${NCS_RUN_DIR};
- ${NCS_DIR}/examples.ncs/getting-started/netsim-sshkey/showcase.sh 1'
- ```
-
-### Upgrading NSO using Docker Compose
-
-This example describes how to upgrade NSO when using Docker Compose.
-
-#### **Upgrade to a New Minor or Major Version**
-
-To upgrade to a new minor or major version, for example, from 6.3 to 6.4, follow the steps below:
-
-1. Change the image version in the Compose file to the new version, here 6.4.
-2. Run the `docker compose --profile build up -d` command to start the Build container with the new image.
-3. Compile the packages using the Build container.
-
- ```bash
-    docker exec -it build-nso-pkgs sh -c 'for f in ${NCS_RUN_DIR}/packages/*/src; \
-    do make -C "$f" all || exit 1; done'
- ```
-4. Run the `docker compose --profile prod up --wait` command to start the Production container with the new packages that were just compiled.
-
-#### **Upgrade to a New Maintenance Version**
-
-To upgrade to a new maintenance release version, for example, 6.4.1, follow the steps below:
-
-1. Change the image version in the Compose file to the new version, here 6.4.1.
-2. Run the `docker compose --profile prod up --wait` command.
-
-    Upgrading in this way does not require recompiling the packages. Compose detects the image change and recreates the Production container with the new version.
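-
-As an illustration, the maintenance upgrade can be scripted. The sketch below assumes the image is named `cisco-nso-prod` and the Compose file is `docker-compose.yaml`:
-
-```bash
-# Bump the image tag in the Compose file, then let Compose recreate the container
-sed -i 's|cisco-nso-prod:6.4|cisco-nso-prod:6.4.1|' docker-compose.yaml
-docker compose --profile prod up --wait
-```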
diff --git a/administration/installation-and-deployment/deployment/deployment-example.md b/administration/installation-and-deployment/deployment/deployment-example.md
deleted file mode 100644
index 5b089dce..00000000
--- a/administration/installation-and-deployment/deployment/deployment-example.md
+++ /dev/null
@@ -1,348 +0,0 @@
----
-description: Understand NSO deployment with an example setup.
----
-
-# Deployment Example
-
-This section shows examples of a typical deployment for a highly available (HA) setup. For a reference implementation of the `tailf-hcc` layer-2 upgrade deployment scenario described here, check the NSO example set under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc). The example covers the following topics:
-
-* Installation of NSO on all nodes in an HA setup
-* Initial configuration of NSO on all nodes
-* HA failover
-* Upgrading NSO on all nodes in the HA cluster
-* Upgrading NSO packages on all nodes in the HA cluster
-
-The deployment examples use both the legacy rule-based and recommended HA Raft setup. See [High Availability](../../management/high-availability.md) for HA details. The HA Raft deployment consists of three nodes running NSO and a node managing them, while the rule-based HA deployment uses only two nodes.
-
-Based on the Raft consensus algorithm, the HA Raft version provides the best fault tolerance, performance, and security and is therefore recommended.
-
-For the HA Raft setup, the NSO nodes `paris.fra`, `london.eng`, and `berlin.ger` make up a cluster of one leader and two followers.
-
-
-The HA Raft Deployment Network
-
-For the rule-based HA setup, the NSO nodes `paris` and `london` make up one HA pair — one primary and one secondary.
-
-
-The Rule-Based HA Deployment Network
-
-HA is usually not optional for a deployment. Data resides in CDB, a RAM database with a disk-based journal for persistence. Both HA variants can be set up to avoid the need for manual intervention in a failure scenario, where HA Raft does the best job of keeping the cluster up. See [High Availability](../../management/high-availability.md) for details.
-
-## Initial NSO Installation
-
-An NSO system installation on the NSO nodes is recommended for deployments. For System Installation details, see the [System Install](../system-install.md) steps.
-
-In this container-based example, Docker Compose uses a `Dockerfile` to build the container image and install NSO on multiple nodes, here containers. A shell script on the manager node uses an SSH client to access the NSO nodes to demonstrate HA failover; as an alternative, a Python script implements SSH and RESTCONF clients.
-
-* An `admin` user is created on the NSO nodes. Password-less `sudo` access is set up to enable the `tailf-hcc` server to run the `ip` command. The manager's SSH client uses public key authentication, while the RESTCONF client uses a token to authenticate with the NSO nodes.
-
- The example creates two packages using the `ncs-make-package` command: `dummy` and `inert`. A third package, `tailf-hcc`, provides VIPs that point to the current HA leader/primary node.
-* The packages are compressed into a `tar.gz` format for easier distribution, but that is not a requirement.
-
-{% hint style="info" %}
-While this deployment example uses containers, it is intended as a generic deployment guide. For details on running NSO in a container, such as Docker, see [Containerized NSO](../containerized-nso.md).
-{% endhint %}
-
-This example uses a minimal Red Hat UBI distribution for hosting NSO with the following added packages:
-
-* NSO's basic dependency requirements are fulfilled by adding the Java Runtime Environment (JRE), OpenSSH, and OpenSSL packages.
-* The OpenSSH server is used for shell access and secure copy to the NSO Linux host for NSO version upgrade purposes. The NSO built-in SSH server provides CLI and NETCONF access to NSO.
-* The NSO services require Python.
-* To fulfill the `tailf-hcc` server dependencies, the `iproute2` utilities and `sudo` packages are installed. See [Dependencies](../../management/high-availability.md#ug.ha.hcc.deps) (in the section [Tailf HCC Package](../../management/high-availability.md#ug.ha.hcc)) for details on dependencies.
-* The `rsyslog` package enables storing an NSO log file from several NSO logs locally and forwarding some logs to the manager.
-* The `arp` command (from the `net-tools` package) and the `ping` command (from `iputils`) have been added for demonstration purposes.
-
-The steps in the list below are performed as `root`. Docker Compose will build the container images, i.e., create the NSO installation as `root`.
-
-The `admin` user will only need `root` access to run the `ip` command when `tailf-hcc` adds the Layer 2 VIP address to the leader/primary node interface.
-
-The initialization steps are also performed as `root` for the nodes that make up the HA cluster:
-
-* Create the `ncsadmin` and `ncsoper` Linux user groups.
-* Create and add the `admin` and `oper` Linux users to their respective groups.
-* Perform a system installation of NSO that runs NSO as the `admin` user.
-* The `admin` user is granted access to run the `ip` command from the `vipctl` script as `root` using the `sudo` command as required by the `tailf-hcc` package.
-* The `cmdwrapper` NSO program gets access to run the scripts executed by the `generate-token` action for generating RESTCONF authentication tokens as the current NSO user.
-* Password authentication is set up for the read-only `oper` user for use with NSO only, which is intended for WebUI access.
-* The `root` user is set up for Linux shell access only.
-* The NSO installer, `tailf-hcc` package, application YANG modules, scripts for generating and authenticating RESTCONF tokens, and scripts for running the demo are all available to the NSO and manager containers.
-* `admin` user permissions are set for the NSO directories and files created by the system install, as well as for the `root`, `admin`, and `oper` home directories.
-* The `ncs.crypto_keys` are generated and distributed to all nodes.\
- \
- **Note**: The `ncs.crypto_keys` file is highly sensitive. It contains the encryption keys for all encrypted CDB data, which often includes passwords for various entities, such as login credentials to managed devices.\
- \
-  **Note**: In an NSO System Install setup, not only the TLS certificates (HA Raft) or shared token (rule-based HA) need to match between the HA cluster nodes, but also the configuration for encrypted strings, by default stored in `/etc/ncs/ncs.crypto_keys`, needs to match between the nodes in the HA cluster. For rule-based HA, the tokens configured on the secondary nodes are overwritten with the encrypted token of type `aes-256-cfb-128-encrypted-string` from the primary node when the secondary connects to the primary. If there is a mismatch between the encrypted-string configuration on the nodes, NSO will not decrypt the HA token to match the token presented. As a result, the primary node denies the secondary node access the next time the HA connection needs to be re-established with a "Token mismatch, secondary is not allowed" error. A sketch for checking that this file is consistent across the cluster nodes follows after this list.
-* For HA Raft, TLS certificates are generated for all nodes.
-* The initial NSO configuration, `ncs.conf`, is updated and in sync (identical) on the nodes.
-* The SSH servers are configured to allow only SSH public key authentication (no password). The `oper` user can use password authentication with the WebUI but has read-only NSO access.
-* The `oper` user is denied access to the Linux shell.
-* The `admin` user can access the Linux shell and NSO CLI using public key authentication.
-* New keys for all users are distributed to the HA cluster nodes and the manager node when the HA cluster is initialized.
-* The OpenSSH server and the NSO built-in SSH server use the same private and public key pairs located under `~/.ssh/id_ed25519`, while the manager public key is stored in the `~/.ssh/authorized_keys` file for both NSO nodes.
-* Host keys are generated for all nodes to allow the NSO built-in SSH and OpenSSH servers to authenticate the server to the client.\
- \
- Each HA cluster node has its own unique SSH host keys stored under `${NCS_CONFIG_DIR}/ssh_host_ed25519_key`. The SSH client(s), here the manager, has the keys for all nodes in the cluster paired with the node's hostname and the VIP address in its `/root/.ssh/known_hosts` file.\
- \
- The host keys, like those used for client authentication, are generated each time the HA cluster nodes are initialized. The host keys are distributed to the manager and nodes in the HA cluster before the NSO built-in SSH and OpenSSH servers are started on the nodes.
-* As NSO runs in containers, the environment variables are set to point to the system install directories in the Docker Compose `.env` file.
-* NSO runs as the non-root `admin` user and, therefore, the NSO system installation is done using the `./nso-${VERSION}.linux.${ARCH}.installer.bin --system-install --run-as-user admin --ignore-init-scripts` options. By default, the NSO installation start script will create a `systemd` system service to run NSO as the `admin` user (default is the `root` user) when NSO is started using the `systemctl start ncs` command.\
- \
- However, this example uses the `--ignore-init-scripts` option to skip installing `systemd` scripts as it runs in a container that does not support `systemd`.\
- \
- The environment variables are copied to a `.pam_environment` file so the `root` and `admin` users can set the required environment variables when those users access the shell via SSH.\
- \
- The `/etc/systemd/system/ncs.service` `systemd` service script is installed as part of the NSO system install, if not using the `--ignore-init-scripts` option, and it can be customized if you would like to use it to start NSO. The script may provide what you need and can be a starting point.
-* The OpenSSH `sshd` and `rsyslog` daemons are started.
-* The packages from the package store are added to the `${NCS_RUN_DIR}/packages` directory before finishing the initialization part in the `root` context.
-* The NSO smart licensing token is set.
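-
-As noted for the `ncs.crypto_keys` file above, the file must be identical on all nodes in the HA cluster. A minimal consistency check from the manager node, assuming SSH access to the nodes (the hostnames are from this example):
-
-```bash
-# The checksums must match on all nodes in the HA cluster
-for h in paris.fra london.eng berlin.ger; do
-    ssh "$h" sha256sum /etc/ncs/ncs.crypto_keys
-done
-```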
-
-## The `ncs.conf` Configuration
-
-* The NSO IPC socket is configured in `ncs.conf` to only listen to localhost 127.0.0.1 connections, which is the default setting.\
-  \
-  By default, the clients connecting to the NSO IPC socket are considered trusted, i.e., no authentication is required, and the 127.0.0.1 IP address is used with `/ncs-config/ncs-ipc-address` in `ncs.conf` to prevent remote access; a quick check is sketched after this list. See [Security Considerations](deployment-example.md#ug.admin_guide.deployment.security) and [ncs.conf(5)](../../../resources/man/ncs.conf.5.md) in Manual Pages for more details.
-* `/ncs-config/aaa/pam` is set to enable PAM to authenticate users as recommended. All remote access to NSO must now be done using the NSO host's privileges. See [ncs.conf(5)](../../../resources/man/ncs.conf.5.md) in Manual Pages for details.
-* Depending on your Linux distribution, you may have to change the `/ncs-config/aaa/pam/service` setting. The default value is `common-auth`. Check the file `/etc/pam.d/common-auth` and make sure it fits your needs. See [ncs.conf(5)](../../../resources/man/ncs.conf.5.md) in Manual Pages for details.\
- \
- Alternatively, or as a complement to the PAM authentication, users can be stored in the NSO CDB database or authenticated externally. See [Authentication](../../management/aaa-infrastructure.md#ug.aaa.authentication) for details.
-* RESTCONF token authentication under `/ncs-config/aaa/external-validation` is enabled using a `token_auth.sh` script that was added earlier together with a `generate_token.sh` script. See [ncs.conf(5)](../../../resources/man/ncs.conf.5.md) in Manual Pages for details.\
- \
- The scripts allow users to generate a token for RESTCONF authentication through, for example, the NSO CLI and NETCONF interfaces that use SSH authentication or the Web interface.
-
- The token provided to the user is added to a simple YANG list of tokens where the list key is the username.
-* The token list is stored in the NSO CDB operational data store and is only accessible from the node's local MAAPI and CDB APIs. See the HA Raft and rule-based HA `upgrade-l2/manager-etc/yang/token.yang` file in the examples.
-* The NSO web server HTTPS interface should be enabled under `/ncs-config/webui`, along with `/ncs-config/webui/match-host-name = true` and `/ncs-config/webui/server-name` set to the hostname of the node, following security best practice. If the server needs to serve multiple domains or IP addresses, additional `server-alias` values can be configured. See [ncs.conf(5)](../../../resources/man/ncs.conf.5.md) in Manual Pages for details.
-
- **Note**: The SSL certificates that NSO generates are self-signed:
-
- ```bash
- $ openssl x509 -in /etc/ncs/ssl/cert/host.cert -text -noout
- Certificate:
- Data:
- Version: 1 (0x0)
- Serial Number: 2 (0x2)
- Signature Algorithm: sha256WithRSAEncryption
- Issuer: C=US, ST=California, O=Internet Widgits Pty Ltd, CN=John Smith
- Validity
- Not Before: Dec 18 11:17:50 2015 GMT
- Not After : Dec 15 11:17:50 2025 GMT
- Subject: C=US, ST=California, O=Internet Widgits Pty Ltd
- Subject Public Key Info:
- .......
- ```
-
- Thus, if this is a production environment and the JSON-RPC and RESTCONF interfaces using the web server are not used solely for internal purposes, the self-signed certificate must be replaced with a properly signed certificate. See [ncs.conf(5)](../../../resources/man/ncs.conf.5.md) in Manual Pages under `/ncs-config/webui/transport/ssl/cert-file` and `/ncs-config/restconf/transport/ssl/certFile` for more details.
-* Disable `/ncs-config/webui/cgi` unless needed.
-* The NSO SSH CLI login is enabled under `/ncs-config/cli/ssh/enabled`. See [ncs.conf(5)](../../../resources/man/ncs.conf.5.md) in Manual Pages for details.
-* The NSO CLI style is set to C-style, and the CLI prompt is modified to include the hostname under `/ncs-config/cli/prompt`. See [ncs.conf(5)](../../../resources/man/ncs.conf.5.md) in Manual Pages for details.
-
- ```xml
-    <prompt1>\u@nso-\H> </prompt1>
-    <prompt2>\u@nso-\H% </prompt2>
-
-    <c-prompt1>\u@nso-\H# </c-prompt1>
-    <c-prompt2>\u@nso-\H(\m)# </c-prompt2>
- ```
-* NSO HA Raft is enabled under `/ncs-config/ha-raft`, and the rule-based HA under `/ncs-config/ha`. See [ncs.conf(5)](../../../resources/man/ncs.conf.5.md) in Manual Pages for details.
-* Depending on your provisioned applications, you may want to turn `/ncs-config/rollback/enabled` off. Rollbacks do not work well with nano service reactive FASTMAP applications or if maximum transaction performance is a goal. If your application performs classical NSO provisioning, the recommendation is to enable rollbacks; otherwise, disable them. See [ncs.conf(5)](../../../resources/man/ncs.conf.5.md) in Manual Pages for details.
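-
-A quick way to verify the IPC listening address mentioned in the first item above is to inspect the listening sockets on the NSO host. This is a sketch, assuming the default IPC port 4569; the output is illustrative:
-
-```bash
-$ ss -ltn | grep 4569
-LISTEN 0      128        127.0.0.1:4569      0.0.0.0:*
-```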
-
-## The `aaa_init.xml` Configuration
-
-The NSO System Install places an AAA `aaa_init.xml` file in the `$NCS_RUN_DIR/cdb` directory. Compared to a Local Install for development, no users are defined for authentication in the `aaa_init.xml` file, and PAM is enabled for authentication. NACM rules for controlling NSO access are defined in the file for users belonging to a `ncsadmin` user group and read-only access for a `ncsoper` user group. As seen in the previous sections, this example creates Linux `root`, `admin`, and `oper` users, as well as the `ncsadmin` and `ncsoper` Linux user groups.
-
-PAM authenticates the users using SSH public key authentication without a passphrase for NSO CLI and NETCONF login. Password authentication is used for the `oper` user intended for NSO WebUI login and token authentication for RESTCONF login.
-
-When the NSO daemon starts and no CDB files exist yet, the default AAA configuration in the `aaa_init.xml` file is used. It is restrictive and is used for this demo with only a minor addition to allow the `oper` user to generate a token for RESTCONF authentication.
-
-The NSO authorization system is group-based; thus, for the rules to apply to a specific user, the user must be a member of the group to which the restrictions apply. PAM performs the authentication, while the NSO NACM rules do the authorization.
-
-* Adding the `admin` user to the `ncsadmin` group and the `oper` user to the limited `ncsoper` group will ensure that the two users get properly authorized with NSO.
-* Not adding the `root` user to any group matching the NACM groups results in zero access, as no NACM rule will match, and the default in the `aaa_init.xml` file is to deny all access.
-
-The NSO NACM functionality is based on the [Network Configuration Access Control Model](https://datatracker.ietf.org/doc/html/rfc8341) IETF RFC 8341 with NSO extensions augmented by `tailf-acm.yang`. See [AAA infrastructure](../../management/aaa-infrastructure.md), for more details.
-
-The manager in this example logs into the different NSO hosts using the Linux user login credentials. This scheme has many advantages, mainly because all audit logs on the NSO hosts will show who did what and when. Therefore, the common bad practice of having a shared `admin` Linux user and NSO local user with a shared password is not recommended.
-
-{% hint style="info" %}
-The default `aaa_init.xml` file provided with the NSO system installation must not be used as-is in a deployment without reviewing and verifying that every NACM rule in the file matches the desired authorization level.
-{% endhint %}
-
-## The High Availability and VIP Configuration
-
-This example sets up one HA cluster using HA Raft or rule-based HA with the `tailf-hcc` server to manage virtual IP addresses. See [NSO Rule-based HA](../../management/high-availability.md) and [Tail-f HCC Package](../../management/high-availability.md#ug.ha.hcc) for details.
-
-The NSO HA, together with the `tailf-hcc` package, provides three features:
-
-* All CDB data is replicated from the leader/primary to the follower/secondary nodes.
-* If the leader/primary fails, a follower/secondary takes over and starts to act as leader/primary. This is how HA Raft works and how the rule-based HA variant of this example is configured to handle failover automatically.
-* At failover, `tailf-hcc` sets up a virtual alias IP address on the leader/primary node only and uses gratuitous ARP packets to update all nodes in the network with the new mapping to the leader/primary node.
-
-Nodes in other networks can be updated using the `tailf-hcc` layer-3 BGP functionality or a load balancer. See the `load-balancer` and `hcc` examples in the NSO example set under [examples.ncs/high-availability](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability).
-
-See the NSO example set under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) for references to the HA Raft and rule-based HA `tailf-hcc` Layer 3 BGP examples.
-
-The HA Raft and rule-based HA upgrade-l2 examples also demonstrate HA failover, upgrading the NSO version on all nodes, and upgrading NSO packages on all nodes.
-
-## Global Settings and Timeouts
-
-Depending on your installation, e.g., the size and speed of the managed devices and the characteristics of your service applications, some default values of NSO may have to be tweaked, particularly some of the timeouts.
-
-* Device timeouts. NSO has connect, read, and write timeouts for traffic between NSO and the managed devices. The default values may not be sufficient if devices are slow to commit or slow to deliver their full configuration. Adjust the timeouts under `/devices/global-settings` accordingly (see the sketch after this list).
-* Service code timeouts. Some service applications can sometimes be slow. Adjusting the `/services/global-settings/service-callback-timeout` configuration might be applicable depending on the applications. However, the best practice is to change the timeout per service from the service code using the Java `ServiceContext.setTimeout` function or the Python `data_set_timeout` function.
-
-There are quite a few different global settings for NSO. The two mentioned above often need to be changed.
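-
-For example, the device timeouts can be adjusted from the NSO CLI. This is a sketch; the values are placeholders, not recommendations:
-
-```bash
-admin@nso-paris(config)# devices global-settings connect-timeout 60
-admin@nso-paris(config)# devices global-settings read-timeout 600
-admin@nso-paris(config)# devices global-settings write-timeout 600
-admin@nso-paris(config)# commit
-```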
-
-## Cisco Smart Licensing
-
-NSO uses Cisco Smart Licensing, which is described in detail in [Cisco Smart Licensing](../../management/system-management/cisco-smart-licensing.md). After registering your NSO instance(s) and receiving a token (following steps 1-6 in the [Create a License Registration Token](../../management/system-management/cisco-smart-licensing.md#d5e2927) section of Cisco Smart Licensing), enter the token from your Cisco Smart Software Manager account on each host. Use the same token for all instances, and script entering the token as part of the initial NSO configuration or from the management node:
-
-```bash
-admin@nso-paris# license smart register idtoken YzY2Yj...
-admin@nso-london# license smart register idtoken YzY2Yj...
-```
-
-{% hint style="info" %}
-The Cisco Smart Licensing CLI command is present only in the Cisco Style CLI, which is the default CLI for this setup.
-{% endhint %}
-
-## Log Management
-
-### Log Rotate
-
-The NSO system installations performed on the nodes in the HA cluster also install defaults for **logrotate**. Inspect `/etc/logrotate.d/ncs` and ensure that the settings are what you want. Note that the NSO error logs, i.e., the files `/var/log/ncs/ncserr.log*`, are internally rotated by NSO and must not be rotated by `logrotate`.
-
-### Syslog
-
-For the HA Raft and rule-based HA upgrade-l2 examples, see the reference from the `README` in the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) example directory. The examples integrate with `rsyslog` to log the `ncs`, `developer`, `upgrade`, `audit`, `netconf`, `snmp`, and `webui-access` logs to syslog with `facility` set to `daemon` in `ncs.conf`.
-
-`rsyslogd` on the nodes in the HA cluster is configured to write the daemon facility logs to `/var/log/daemon.log`, and forward the daemon facility logs with the severity `info` or higher to the manager node's `/var/log/ha-cluster.log` syslog.
-
-### Audit Network Log and NED Traces
-
-Use the audit network log to record southbound traffic toward devices. Enable it by setting `/ncs-config/logs/audit-network-log/enabled` and `/ncs-config/logs/audit-network-log/file/enabled` to `true` in `$NCS_CONFIG_DIR/ncs.conf`. See [ncs.conf(5)](../../../resources/man/ncs.conf.5.md) in Manual Pages for more information.
-
-NED trace logs are a crucial tool for debugging NSO installations, but they are very verbose and intended for debugging only. Do not enable them in production deployments.
-
-Note that the NED traces log everything; no filtering is done, so even potentially sensitive data is logged. The NED trace logs are controlled through the CLI under `/devices/global-settings/trace`. It is also possible to control the NED trace on a per-device basis under `/devices/device[name='x']/trace`.
-
-There are three different settings for trace output. For various historical reasons, the setting that makes the most sense depends on the device type.
-
-* For all CLI NEDs, use the `raw` setting.
-* For all ConfD and netsim-based NETCONF devices, use the `pretty` setting. This is because ConfD sends the NETCONF XML unformatted, while `pretty` means that the XML is formatted.
-* For Juniper devices, use the `raw` setting. Juniper devices sometimes send broken XML that cannot be formatted appropriately. However, their XML payload is already indented and formatted.
-* For generic NED devices - depending on the level of trace support in the NED itself, use either `pretty` or `raw`.
-* For SNMP-based devices, use the `pretty` setting.
-
-Thus, it is usually not good enough to control the NED trace from `/devices/global-settings/trace`.
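-
-For example, to enable a raw trace for a single device (the device name `ex0` is a placeholder):
-
-```bash
-admin@nso-paris(config)# devices device ex0 trace raw
-admin@nso-paris(config)# commit
-```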
-
-### Python Logs
-
-While there is a global log for, for example, compilation errors in `/var/log/ncs/ncs-python-vm.log`, logs from user application packages are written to separate files for each package, and the log file naming is `ncs-python-vm-`_`pkg_name`_`.log`. The level of logging from Python code is controlled on a per package basis. See [Debugging of Python packages](../../../development/core-concepts/nso-virtual-machines/nso-python-vm.md#debugging-of-python-packages) for more details.
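-
-As a sketch, the Python log level can be raised for a single package from the CLI; the package name `mypkg` is a placeholder:
-
-```bash
-admin@nso-paris(config)# python-vm logging vm-levels mypkg level level-debug
-admin@nso-paris(config)# commit
-```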
-
-### Java Logs
-
-User application Java logs are written to `/var/log/ncs/ncs-java-vm.log`. The level of logging from Java code is controlled per Java package. See [Logging](../../../development/core-concepts/nso-virtual-machines/nso-java-vm.md#logging) in Java VM for more details.
-
-### Internal NSO Log
-
-The internal NSO log resides at `/var/log/ncs/ncserr.*`. The log is written in a binary format. To view the internal error log, run the following command:
-
-```bash
-$ ncs --printlog /var/log/ncs/ncserr.log.1
-```
-
-## Monitoring the Installation
-
-All large-scale deployments employ monitoring systems. There are plenty of good tools to choose from, both open source and commercial. Any good monitoring tool can script the checks (using various protocols) for what should be monitored. It is recommended to set up a special read-only Linux user without shell access, like the `oper` user described earlier in this chapter. A few commonly used checks include:
-
-* At startup, check that NSO has been started using the `$NCS_DIR/bin/ncs_cmd -c "wait-start 2"` command.
-* Use the `ssh` command to verify SSH access to the NSO host and NSO CLI.
-* Check disk usage using, for example, the `df` utility.
-* Verify that the RESTCONF API is accessible using, for example, **curl** or the Python `requests` library.
-* Check that the NETCONF API is accessible using, for example, the `$NCS_DIR/bin/netconf-console` tool with a `hello` message.
-* Verify the NSO version using, for example, the `$NCS_DIR/bin/ncs --version` or RESTCONF `/restconf/data/tailf-ncs-monitoring:ncs-state/version`.
-* Check if HA is enabled using, for example, RESTCONF `/restconf/data/tailf-ncs-monitoring:ncs-state/ha`.
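-
-A minimal sketch of such scripted checks against one node, where the hostname, ports, token, and paths are placeholders:
-
-```bash
-#!/bin/bash
-# Probe the NSO node "paris"; each check prints a result for the monitoring tool
-ssh admin@paris 'ncs_cmd -c "wait-start 2"' && echo "NSO is up"
-ssh admin@paris 'df -h /var/opt/ncs'
-curl -ks -H "X-Auth-Token: $TOKEN" \
-    https://paris:8888/restconf/data/tailf-ncs-monitoring:ncs-state/version
-netconf-console --host paris --port 2022 --hello > /dev/null && echo "NETCONF OK"
-```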
-
-### Alarms
-
-RESTCONF can be used to view the NSO alarm table and subscribe to alarm notifications. NSO alarms are not events. Whenever an NSO alarm is created, a RESTCONF notification and SNMP trap are also sent, assuming that you have a RESTCONF client registered with the alarm stream or configured a proper SNMP target. Some alarms, like the rule-based HA `ha-secondary-down` alarm, require the intervention of an operator. Thus, a monitoring tool should also fetch the NSO alarm list.
-
-```bash
-$ curl -ik -H "X-Auth-Token: TsZTNwJZoYWBYhOPuOaMC6l41CyX1+oDaasYqQZqqok=" \
-https://paris:8888/restconf/data/tailf-ncs-alarms:alarms
-```
-
-Or subscribe to the `ncs-alarms` RESTCONF notification stream.
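-
-A subscription with **curl** could look along these lines; the exact stream location can be discovered under `/restconf/data/ietf-restconf-monitoring:restconf-state/streams`:
-
-```bash
-$ curl -iks -H "X-Auth-Token: TsZTNwJZoYWBYhOPuOaMC6l41CyX1+oDaasYqQZqqok=" \
-"https://paris:8888/restconf/streams/ncs-alarms/json"
-```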
-
-### Metric - Counters, Gauges, and Rate of Change Gauges
-
-NSO metric has several contexts, each containing counters, gauges, and rate-of-change gauges: a `sysadmin`, a `developer`, and a `debug` context. Note that only the `sysadmin` context is enabled by default, as it is designed to be lightweight. Consult the YANG module `tailf-ncs-metric.yang` to learn the details of the different contexts.
-
-### **Counters**
-
-You can read counters from, for example, the CLI:
-
-```bash
-admin@ncs# show metric sysadmin counter session cli-total
-metric sysadmin counter session cli-total 1
-```
-
-### **Gauges**
-
-You can read gauges from, for example, the CLI:
-
-```bash
-admin@ncs# show metric sysadmin gauge session cli-open
-metric sysadmin gauge session cli-open 1
-```
-
-### **Rate of Change Gauges**
-
-You can read rate-of-change gauges from, for example, the CLI:
-
-```bash
-admin@ncs# show metric sysadmin gauge-rate session cli-open
-NAME RATE
--------------
-1m 0.0
-5m 0.2
-15m 0.066
-```
-
-## Security Considerations
-
-This section covers security considerations for this example. See [Secure Deployment Considerations](secure-deployment.md) for a general description.
-
-The presented configuration enables the built-in web server for the WebUI and RESTCONF interfaces. It is paramount for security that you only enable HTTPS access with `/ncs-config/webui/match-host-name` and `/ncs-config/webui/server-name` properly set.
-
-The AAA setup described so far in this deployment document is the recommended AAA setup. To reiterate:
-
-* Have all users that need access to NSO authenticated through Linux PAM. This may then be through `/etc/passwd`. Avoid storing users in CDB.
-* Given the default NACM authorization rules, you should have three different types of users on the system.
- * Users with shell access are members of the `ncsadmin` Linux group and are considered fully trusted because they have full access to the system.
-  * Users without shell access who are members of the `ncsadmin` Linux group have full access to the network. They have access to the NSO SSH shell and can execute RESTCONF calls, access the NSO CLI, make configuration changes, etc. However, they cannot manipulate backups or perform system upgrades unless such actions are added by NSO applications.
-  * Users without shell access who are members of the `ncsoper` Linux group have read-only access. They can access the NSO SSH shell, read data using RESTCONF calls, etc. However, they cannot change the configuration, manipulate backups, or perform system upgrades.
-
-If you have more fine-grained authorization requirements than read-write and read-only, additional Linux groups can be created, and the NACM rules can be updated accordingly. See [The `aaa_init.xml` Configuration](deployment-example.md#ug.admin_guide.deployment.aaa) from earlier in this chapter on how the reference example implements users, groups, and NACM rules to achieve the above.
-
-The default `aaa_init.xml` file must not be used as-is before reviewing and verifying that every NACM rule in the file matches the desired authorization level.
-
-For a detailed discussion of the configuration of authorization rules through NACM, see [AAA infrastructure](../../management/aaa-infrastructure.md), particularly the section [Authorization](../../management/aaa-infrastructure.md#ug.aaa.authorization).
-
-A considerably more complex scenario is when users require shell access to the host but are either untrusted or should not have any access to NSO at all. NSO listens to a so-called IPC socket configured through `/ncs-config/ncs-ipc-address`. This socket is typically limited to local connections and defaults to `127.0.0.1:4569` for security. The socket multiplexes several different access methods to NSO.
-
-The main security-related point is that no AAA checks are performed on this socket. If you have access to the socket, you also have complete access to all of NSO.
-
-To drive this point home: when you invoke the `ncs_cli` command (a small C program that connects to the socket and tells NSO who you are), NSO assumes that authentication has already been performed. There is even a documented flag, `--noaaa`, which tells NSO to skip all NACM rule checks for this session.
-
-You must protect the socket to prevent untrusted Linux shell users from accessing the NSO instance using this method. This is done by using a file in the Linux file system. The file `/etc/ncs/ipc_access` gets created and populated with random data at install time. Enable `/ncs-config/ncs-ipc-access-check/enabled` in `ncs.conf` and ensure that trusted users can read the `/etc/ncs/ipc_access` file, for example, by changing group access to the file. See [ncs.conf(5)](../../../resources/man/ncs.conf.5.md) in Manual Pages for details.
-
-```bash
-$ cat /etc/ncs/ipc_access
-cat: /etc/ncs/ipc_access: Permission denied
-$ sudo chown root:ncsadmin /etc/ncs/ipc_access
-$ sudo chmod g+r /etc/ncs/ipc_access
-$ ls -lat /etc/ncs/ipc_access
-$ cat /etc/ncs/ipc_access
-.......
-```
-
-For an HA setup, HA Raft is based on the Raft consensus algorithm and provides the best fault tolerance, performance, and security. It is therefore recommended over the legacy rule-based HA variant. The `raft-upgrade-l2` project, referenced from the NSO example set under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc), together with this Deployment Example section, describes a reference implementation. See [NSO HA Raft](../../management/high-availability.md#ug.ha.raft) for more HA Raft details.
diff --git a/administration/installation-and-deployment/deployment/develop-and-deploy-a-nano-service.md b/administration/installation-and-deployment/deployment/develop-and-deploy-a-nano-service.md
deleted file mode 100644
index 38d21775..00000000
--- a/administration/installation-and-deployment/deployment/develop-and-deploy-a-nano-service.md
+++ /dev/null
@@ -1,427 +0,0 @@
----
-description: Develop and deploy a nano service using a guided example.
----
-
-# Develop and Deploy a Nano Service
-
-This section shows how to develop and deploy a simple NSO nano service for managing the provisioning of SSH public keys for authentication. For more details on nano services, see [Nano Services for Staged Provisioning](../../../development/core-concepts/nano-services.md) in Development. The example showcasing development is available under [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey). In addition, there is a reference from the `README` in the example's directory to the deployment version of the example.
-
-## Development
-
-
-The Development Host Topology
-
-After installing NSO with the [Local Install](../local-install.md) option, development often begins with either retrieving an existing YANG model representing what the managed network element (a virtual or physical device, such as a router) can do or constructing a new YANG model that at least covers the configuration of interest to an NSO service. To enable NSO service development, the network element's YANG model can be used with NSO's netsim tool that uses ConfD (Configuration Daemon) to simulate the network elements and their management interfaces like NETCONF. Read more about netsim in [Network Simulator](../../../operation-and-usage/operations/network-simulator-netsim.md).
-
-The simple network element YANG model used for this example is available under `packages/ne/src/yang/ssh-authkey.yang`. The `ssh-authkey.yang` model implements a list of SSH public keys for identifying a user. The list of keys augments a list of users in the ConfD built-in `tailf-aaa.yang` module that ConfD uses to authenticate users.
-
-```yang
-module ssh-authkey {
- yang-version 1.1;
- namespace "http://example.com/ssh-authkey";
- prefix sa;
-
- import tailf-common {
- prefix tailf;
- }
-
- import tailf-aaa {
- prefix aaa;
- }
-
- description
- "List of SSH authorized public keys";
-
- revision 2023-02-02 {
- description
- "Initial revision.";
- }
-
- augment "/aaa:aaa/aaa:authentication/aaa:users/aaa:user" {
- list authkey {
- key pubkey-data;
- leaf pubkey-data {
- type string;
- }
- }
- }
-}
-```
-
-On the network element, a Python application subscribes to ConfD to be notified of configuration changes to the user's public keys and updates the user's authorized\_keys file accordingly. See `packages/ne/netsim/ssh-authkey.py` for details.
-
-The first step is to create an NSO package from the network element YANG model. Since NSO will use NETCONF over SSH to communicate with the device, the package will be a NETCONF NED. The package can be created using the `ncs-make-package` command or the NETCONF NED builder tool. The `ncs-make-package` command is typically used when the YANG models used by the network element are available. Hence, the `packages/ne` package for this example was generated using the `ncs-make-package` command.
-
-As the `ssh-authkey.yang` model augments the users list in the ConfD built-in `tailf-aaa.yang` model, NSO needs a representation of that YANG model too to build the NED. However, the service will only configure the user's public keys, so only a subset of the `tailf-aaa.yang` model that only includes the user list is sufficient. To compare, see the `packages/ne/src/yang/tailf-aaa.yang` in the example vs. the network element's version under `$NCS_DIR/netsim/confd/src/confd/aaa/tailf-aaa.yang`.
-
-Now that the network element package is defined, next up is the service package, beginning with finding out what steps are required for NSO to authenticate with the network element using SSH public key authentication:
-
-1. First, generate private and public keys using, for example, the `ssh-keygen` OpenSSH authentication key utility.
-2. Distribute the public keys to the ConfD-enabled network element's list of authorized keys.
-3. Configure NSO to use public key authentication with the network element.
-4. Finally, test the public key authentication by connecting NSO with the network element.
-
-The outline above indicates that the service will benefit from implementing several smaller (nano) steps:
-
-* The first step only generates private and public key files with no configuration. Thus, the first step should be implemented by an action before the second step runs, not as part of the second step transaction `create()` callback code configuring the network elements. The `create()` callback runs multiple times, for example, for service configuration changes, re-deploy, or commit dry-run. Therefore, generating keys should only happen when creating the service instance.
-* The third step cannot be executed before the second step is complete, as NSO cannot use the public key for authenticating with the network element before the network element has it in its list of authorized keys.
-* The fourth step uses the NSO built-in `connect()` action and should run after the third step finishes.
-
-What configuration input do the above steps need?
-
-* The name of the network element that will authenticate a user with an SSH public key.
-* The name of the local NSO user that maps to the remote network element user the public key authenticates.
-* The name of the remote network element user.
-* A passphrase is used for encrypting the private key, guarding its privacy. The passphrase should be encrypted when storing it in the CDB, just like any other password.
-* The name of the NSO authentication group to configure for public-key authentication with the NSO-managed network element.
-
-A service YANG model that implements the above configuration:
-
-```yang
- container pubkey-dist {
- list key-auth {
- key "ne-name local-user";
-
- uses ncs:nano-plan-data;
- uses ncs:service-data;
- ncs:servicepoint "distkey-servicepoint";
-
- leaf ne-name {
- type leafref {
- path "/ncs:devices/ncs:device/ncs:name";
- }
- }
- leaf local-user {
- type leafref {
- path "/ncs:devices/ncs:authgroups/ncs:group/ncs:umap/ncs:local-user";
- require-instance false;
- }
- }
- leaf remote-name {
- type leafref {
- path "/ncs:devices/ncs:authgroups/ncs:group/ncs:umap/ncs:remote-name";
- require-instance false;
- }
- mandatory true;
- }
- leaf authgroup-name {
- type leafref {
- path "/ncs:devices/ncs:authgroups/ncs:group/ncs:name";
- require-instance false;
- }
- mandatory true;
- }
- leaf passphrase {
- // Leave unset for no passphrase
- tailf:suppress-echo true;
- type tailf:aes-256-cfb-128-encrypted-string {
- length "10..max" {
- error-message "The passphrase must be at least 10 characters long";
- }
- pattern ".*[a-z]+.*" {
- error-message "The passphrase must have at least one lower case alpha";
- }
- pattern ".*[A-Z]+.*" {
- error-message "The passphrase must have at least one upper case alpha";
- }
- pattern ".*[0-9]+.*" {
- error-message "The passphrase must have at least one digit";
- }
- pattern ".*[<>~;:!@#/$%^&*=-]+.*" {
- error-message "The passphrase must have at least one of these" +
- " symbols: [<>~;:!@#/$%^&*=-]+";
- }
- pattern ".* .*" {
- modifier invert-match;
- error-message "The passphrase must have no spaces";
- }
- }
- }
- ...
- }
- }
-```
-
-For details on the YANG statements used by the YANG model, such as `leaf`, `container`, `list`, `leafref`, `mandatory`, `length`, `pattern`, etc., see the [IETF RFC 7950](https://www.rfc-editor.org/rfc/rfc7950) that documents the YANG 1.1 Data Modeling Language. The `tailf:xyz` are YANG extension statements documented by [tailf\_yang\_extensions(5)](../../../resources/man/tailf_yang_extensions.5.md) in Manual Pages.
-
-The service configuration is implemented in YANG by a `key-auth` list where the network element and local user names are the list keys. In addition, the list has a `distkey-servicepoint` service point YANG extension statement to enable the list parameters used by the Python service callbacks that this example implements. Finally, the used `service-data` and `nano-plan-data` groupings add the common definitions for a service and the plan data needed when the service is a nano service.
-
-For the nano service YANG part, an NSO YANG nano service behavior tree extension that references a plan outline extension implements the above steps for setting up SSH public key authentication with a network element:
-
-```yang
- ncs:plan-outline distkey-plan {
- description "Plan for distributing a public key";
- ncs:component-type "dk:ne" {
- ncs:state "ncs:init";
- ncs:state "dk:generated" {
- ncs:create {
- // Request the generate-keys action
- ncs:post-action-node "$SERVICE" {
- ncs:action-name "generate-keys";
- ncs:result-expr "result = 'true'";
- ncs:sync;
- }
- }
- ncs:delete {
- // Request the delete-keys action
- ncs:post-action-node "$SERVICE" {
- ncs:action-name "delete-keys";
- ncs:result-expr "result = 'true'";
- }
- }
- }
- ncs:state "dk:distributed" {
- ncs:create {
- // Invoke a Python program to distribute the authorized public key to
- // the network element
- ncs:nano-callback;
- ncs:force-commit;
- }
- }
- ncs:state "dk:configured" {
- ncs:create {
- // Invoke a Python program that in turn invokes a service template to
- // configure NSO to use public key authentication with the network
- // element
- ncs:nano-callback;
- // Request the connect action to test the public key authentication
- ncs:post-action-node "/ncs:devices/device[name=$NE-NAME]" {
- ncs:action-name "connect";
- ncs:result-expr "result = 'true'";
- }
- }
- }
- ncs:state "ncs:ready";
- }
- }
- ncs:service-behavior-tree distkey-servicepoint {
- description "One component per distkey behavior tree";
- ncs:plan-outline-ref "dk:distkey-plan";
- ncs:selector {
- // The network element name used with this component
- ncs:variable "NE-NAME" {
- ncs:value-expr "current()/ne-name";
- }
- // The unique component name
- ncs:variable "NAME" {
- ncs:value-expr "concat(current()/ne-name, '-', current()/local-user)";
- }
- // Component for setting up public key authentication
- ncs:create-component "$NAME" {
- ncs:component-type-ref "dk:ne";
- }
- }
- }
-```
-
-The nano `service-behavior-tree` for the service point creates a nano service component for each list entry in the `key-auth` list. The last connection verification step of the nano service, the `connected` state, uses the `NE-NAME` variable. The `NAME` variable concatenates the `ne-name` and `local-user` keys from the `key-auth` list to create a unique nano service component name.
-
-The only step that requires both a create and a delete part is the `generated` state action that generates the SSH keys. If a user deletes a service instance and no other network element currently uses the generated keys, the keys are deleted too. NSO will revert the configuration automatically as part of the FASTMAP algorithm. Hence, the service list instances also need actions for generating and deleting keys.
-
-```yang
- container pubkey-dist {
- list key-auth {
- key "ne-name local-user";
- ...
- action generate-keys {
- tailf:actionpoint generate-keys;
- output {
- leaf result {
- type boolean;
- }
- }
- }
- action delete-keys {
- tailf:actionpoint delete-keys;
- output {
- leaf result {
- type boolean;
- }
- }
- }
- }
- }
-```
-
-The actions have no input statements, as the input is the configuration in the service instance list entry.
-
-The `generated` state uses the `ncs:sync` statement to ensure that the keys exist before the `distributed` state runs. Similarly, the `distributed` state uses the `force-commit` statement to commit the configuration to the NSO CDB and the network elements before the `configured` state runs.
-
-See the `packages/distkey/src/yang/distkey.yang` YANG model for the nano service behavior tree, plan outline, and service configuration implementation.
-
-Next, handling the key generation, distributing keys to the network element, and configuring NSO to authenticate using the keys with the network element requires some code, here written in Python, implemented by the `packages/distkey/python/distkey/distkey-app.py` script application.
-
-The Python script application defines a Python `DistKeyApp` class specified in the `packages/distkey/package-meta-data.xml` file that NSO starts in a Python thread. This Python class inherits `ncs.application.Application` and implements the `setup()` and `teardown()` methods. The `setup()` method registers the nano service `create()` callbacks and the action handlers for generating and deleting the key files. Using the nano service state to separate the two nano service `create()` callbacks for the distribution and NSO configuration of keys, only one Python class, the `DistKeyServiceCallbacks` class, is needed to implement them.
-
-```python
-class DistKeyApp(ncs.application.Application):
- def setup(self):
- # Nano service callbacks require a registration for a service point,
- # component, and state, as specified in the corresponding data model
- # and plan outline.
- self.register_nano_service('distkey-servicepoint', # Service point
- 'dk:ne', # Component
- 'dk:distributed', # State
- DistKeyServiceCallbacks)
- self.register_nano_service('distkey-servicepoint', # Service point
- 'dk:ne', # Component
- 'dk:configured', # State
- DistKeyServiceCallbacks)
-
- # Side effect action that uses ssh-keygen to create the keyfiles
- self.register_action('generate-keys', GenerateActionHandler)
- # Action to delete the keys created by the generate keys action
- self.register_action('delete-keys', DeleteActionHandler)
-
- def teardown(self):
- self.log.info('DistKeyApp FINISHED')
-```
-
-The action for generating keys calls the OpenSSH `ssh-keygen` command to generate the private and public key files. Calling `ssh-keygen` is kept out of the service `create()` callback to avoid the key generation running multiple times, for example, for service changes, re-deploy, or dry-run commits. Also, for added security, NSO encrypts the passphrase used when generating the keys (see the YANG model), so the Python code decrypts it before using it with the `ssh-keygen` command.
-
-```python
-class GenerateActionHandler(Action):
- @Action.action
- def cb_action(self, uinfo, name, keypath, ainput, aoutput, trans):
- '''Action callback'''
- service = ncs.maagic.get_node(trans, keypath)
- # Install the crypto keys used to decrypt the service passphrase leaf
- # as input to the key generation.
- with ncs.maapi.Maapi() as maapi:
- _maapi.install_crypto_keys(maapi.msock)
- # Decrypt the passphrase leaf for use when generating the keys
- encrypted_passphrase = service.passphrase
- decrypted_passphrase = _ncs.decrypt(str(encrypted_passphrase))
-        # Report the result via the action's output leaf
-        aoutput.result = True
- # If it does not exist already, generate a private and public key
- if os.path.isfile(f'./{service.local_user}_ed25519') == False:
- result = subprocess.run(['ssh-keygen', '-N',
- f'{decrypted_passphrase}', '-t', 'ed25519',
- '-f', f'./{service.local_user}_ed25519'],
- stdout=subprocess.PIPE, check=True,
- encoding='utf-8')
- if "has been saved" not in result.stdout:
-                aoutput.result = False
-```
-
-The `DeleteActionHandler` action deletes the key files if no more network elements use the user's keys:
-
-```python
-class DeleteActionHandler(Action):
- @Action.action
- def cb_action(self, uinfo, name, keypath, ainput, aoutput, trans):
- '''Action callback'''
- service = ncs.maagic.get_node(trans, keypath)
- # Only delete the key files if no more network elements use this
- # user's keys
- cur = trans.cursor('/pubkey-dist/key-auth')
- remove_key = True
- while True:
- try:
- value = next(cur)
- if value[0] != service.ne_name and value[1] == service.local_user:
- remove_key = False
- break
- except StopIteration:
- break
-        aoutput.result = True
- if remove_key is True:
- try:
- os.remove(f'./{service.local_user}_ed25519.pub')
- os.remove(f'./{service.local_user}_ed25519')
- except OSError as e:
- if e.errno != errno.ENOENT:
-                aoutput.result = False
-```
-
-The Python class for the nano service `create()` callbacks handles both the distribution and the NSO configuration of the keys. The `dk:distributed` state `create()` callback code adds the public key data to the network element's list of authorized keys. For the `create()` call in the `dk:configured` state, a template is used to configure NSO to use public key authentication with the network element. The template could be applied directly from the nano service, but in this case, it is applied from the Python code to pass the current working directory to the template:
-
-```python
-class DistKeyServiceCallbacks(NanoService):
- @NanoService.create
- def cb_nano_create(self, tctx, root, service, plan, component, state,
- proplist, component_proplist):
- '''Nano service create callback'''
- if state == 'dk:distributed':
- # Distribute the public key to the network element's authorized
- # keys list
- with open(f'./{service.local_user}_ed25519.pub', 'r') as f:
- pubkey_data = f.read()
- config = root.devices.device[service.ne_name].config
- users = config.aaa.authentication.users
- users.user[service.local_user].authkey.create(pubkey_data)
- elif state == 'dk:configured':
- # Configure NSO to use a public key for authentication with
- # the network element
- template_vars = ncs.template.Variables()
- template_vars.add('CWD', os.getcwd())
- template = ncs.template.Template(service)
- template.apply('distkey-configured', template_vars)
-```
-
-The template to configure NSO to use public key authentication with the network element is available under `packages/distkey/templates/distkey-configured.xml` and is structured along these lines:
-
-```xml
-<config-template xmlns="http://tail-f.com/ns/config/1.0">
-  <devices xmlns="http://tail-f.com/ns/ncs">
-    <authgroups>
-      <group>
-        <name>{authgroup-name}</name>
-        <umap>
-          <local-user>{local-user}</local-user>
-          <remote-name>{remote-name}</remote-name>
-          <public-key>
-            <private-key>
-              <file>
-                <name>{$CWD}/{local-user}_ed25519</name>
-                <passphrase>{passphrase}</passphrase>
-              </file>
-            </private-key>
-          </public-key>
-        </umap>
-      </group>
-    </authgroups>
-    <device>
-      <name>{ne-name}</name>
-      <authgroup>{authgroup-name}</authgroup>
-    </device>
-  </devices>
-</config-template>
-```
-
-The example uses three scripts to showcase the nano service:
-
-* A shell script, `showcase.sh`, which uses the `ncs_cli` program to run CLI commands via the NSO IPC port.
-* A Python script, `showcase-rc.sh`, which uses the `requests` package for RESTCONF edit operations and receiving event notifications.
-* A Python script, `showcase-maapi.sh`, which uses NSO MAAPI via the NSO IPC port.
-
-The `ncs_cli` program identifies itself with NSO as the `admin` user without authentication, and the RESTCONF client uses plain HTTP and basic user password authentication. All three scripts demonstrate the service by generating keys, distributing the public key, and configuring NSO for public key authentication with the network elements. To run the example, see the instructions in the `README` file of the example.
-
-## Deployment
-
-See the `README` in the `netsim-sshkey` example's directory for a reference to an NSO system installation in a container deployment variant.
-
-
-The Deployment Container Topology
-
-The deployment variant differs from the development example by:
-
-* Installing NSO with a system installation for deployment instead of a local installation suitable for development
-* Addressing NSO security by running NSO as the `admin` user and authenticating using a public key and token.
-* Rotating NSO logs to avoid running out of disk space
-* Installing the `distkey` service package and `ne` NED package at startup
-* Using SSH with public key authentication for the NSO CLI showcase script instead of the **ncs\_cli** program over unsecured IPC
-* Replacing the Python MAAPI showcase script with a RESTCONF over HTTPS variant in Python, avoiding Python MAAPI over unsecured IPC
-* Having NSO and the network elements (simulated by the ConfD subscriber application) run in separate containers
-* NSO is either pre-installed in the NSO production container image or installed in a generic Linux container.
-
-The deployment example sets up a minimal production installation where the NSO process runs as the `admin` OS user, relying on PAM authentication for the `admin` and `oper` NSO users. The `admin` user is authenticated over SSH using a public key for CLI and NETCONF access and over RESTCONF HTTPS using a token. The read-only `oper` user uses password authentication. The `oper` user can access the NSO WebUI over HTTPS port 443 from the container host.
-
-A modified version of the NSO configuration file `ncs.conf` from the example running with a local install NSO is located in the `$NCS_CONFIG_DIR` (`/etc/ncs`) directory. The `packages`, `ncs-cdb`, `state`, and `scripts` directories are now under the `$NCS_RUN_DIR` (`/var/opt/ncs`) directory. The log directory is now the `$NCS_LOG_DIR` (`/var/log/ncs`) directory. Finally, the `$NCS_DIR` variable points to `/opt/ncs/current`.
-
-Two scripts showcase the nano service:
-
-* A shell script that runs NSO CLI commands over SSH.
-* A Python script that uses the `requests` package to perform edit operations and receive event notifications.
-
-As with the development version, both scripts demonstrate the service by generating keys, distributing the public key, and configuring NSO for public key authentication with the network elements.
-
-To run the example and for more details, see the instructions in the `README` file of the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey) deployment example.
diff --git a/administration/installation-and-deployment/deployment/secure-deployment.md b/administration/installation-and-deployment/deployment/secure-deployment.md
deleted file mode 100644
index 04ba84fe..00000000
--- a/administration/installation-and-deployment/deployment/secure-deployment.md
+++ /dev/null
@@ -1,199 +0,0 @@
----
-description: Security features to consider for NSO deployment.
----
-
-# Secure Deployment
-
-When deploying NSO in production environments, security should be a primary consideration. This section describes the NSO features available for securing your NSO deployment.
-
-## Development vs. Production Deployment
-
-NSO installations can be configured for development or production use, with significantly different security implications.
-
-### Production Installation
-
-* Use the NSO Installer with the `--system-install` option for production deployments.
- * The `--local-install` option should only be used for development environments.
- * Use the NSO Installer `--run-as-user User` option to run NSO as a non-root user.
-* Never use `ncs.conf` files from NSO distribution examples in production.
- * Evaluate and customize the default `ncs.conf` file provided with a system installation to meet your specific security requirements.
-
-### Key Configuration Differences
-
-The default `ncs.conf` for production installations differs from the development default `ncs.conf` in several critical security areas:
-
-#### Encryption Keys
-
-* Production (system) installations use external key management where `ncs.conf` points to `${NCS_CONFIG_DIR}/ncs.crypto_keys` using the `${NCS_DIR}/bin/ncs_crypto_keys` command to retrieve them.
-* Development installations include the encryption keys directly in `ncs.conf`.
-
-#### SSH Configuration
-
-* Production restricts SSH host key algorithms to `ssh-ed25519` only.
-* Development allows multiple algorithms for compatibility.
-
-#### Authentication
-
-* Production disables local authentication by default, using PAM with `system-auth`.
-* Development enables local authentication and uses PAM with `common-auth`.
-* Production includes password expiration warnings.
-
-#### Network Interfaces
-
-* Production disables CLI SSH, HTTP WebUI, and NETCONF SSH by default.
-* Development enables these interfaces for convenience.
-* Production enables restricted-file-access for CLI.
-
-See [ncs.conf(5)](../../../resources/man/ncs.conf.5.md) for all available options to configure the NSO daemon.
-
-## Eliminating Root Access
-
-Running NSO with minimal privileges is a fundamental security best practice:
-
-* Use the NSO installer `--run-as-user User` option to run NSO as a non-root user.
-* Some files are installed with elevated privileges - refer to the [ncs-installer(1)](../../../resources/man/ncs-installer.1.md#system-installation) man page under the `--run-as-user User` option for details.
-* The NSO production container runs NSO from a [non-root user](../containerized-nso.md#nso-runs-from-a-non-root-user).
-* If the CLI is used and we want to create CLI commands that run executables, we may want to modify the permissions of the `$NCS_DIR/lib/ncs/lib/confd-*/priv/cmdptywrapper` program.\
- To be able to run an executable as root or a specific user, we need to make `cmdptywrapper` `setuid` `root`, i.e.:
-
- 1. `# chown root cmdptywrapper`
- 2. `# chmod u+s cmdptywrapper`
-
- Failing that, all programs will be executed as the user running the `ncs` daemon. Consequently, if that user is `root`, we do not have to perform the `chmod` operations above. The same applies to executables run via actions, but then we may want to modify the permissions of the `$NCS_DIR/lib/ncs/lib/confd-*/priv/cmdwrapper` program instead:
-
- 1. `# chown root cmdwrapper`
- 2. `# chmod u+s cmdwrapper`
-* The deployment variant referenced in the README file of the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey) example provides a native and NSO production container based example.
-
-## Authentication, Authorization, and Accounting (AAA)
-
-### PAM Authentication
-
-PAM (Pluggable Authentication Modules) is the recommended authentication method for NSO:
-
-* Group assignments based on the OS group database `/etc/group`.
-* Default NACM (Network Configuration Access Control Module) settings provide two groups:
- * `ncsadmin`: unlimited access rights.
- * `ncsoper`: minimal access rights (read-only).
-
-See [PAM](../../management/aaa-infrastructure.md#ug.aaa.pam) for details.
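-
-As an illustration, OS-level group assignment for the two default NACM groups might look as follows (the `admin` and `oper` usernames are assumed examples):
-
-```bash
-# Create the default NACM groups and map OS users to them:
-sudo groupadd ncsadmin
-sudo groupadd ncsoper
-sudo usermod -a -G ncsadmin admin   # unlimited access rights
-sudo usermod -a -G ncsoper oper     # read-only access rights
-```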
-
-### Customizing AAA Configuration
-
-When customizing the default `aaa_init.xml` configuration:
-
-* Exclude credentials unless local authentication is explicitly enabled.
-* Never use default passwords.
-* Carefully consider which groups can modify NACM rules.
-* Tailor NACM settings for user groups based on the principle of least privilege.
-
-See [AAA Infrastructure](../../management/aaa-infrastructure.md) for details.
-
-### Additional Authentication Methods
-
-* CLI and NETCONF: SSH public key authentication.
-* RESTCONF: Token, JWT, LDAP, or TACACS+ authentication.
-* WebUI: HTTPS (TLS) with JSON-RPC SSO (Single Sign-On).
-
-{% hint style="info" %}
-Disable unused interfaces in `ncs.conf` to reduce the attack surface.
-{% endhint %}
-
-See [Authentication](../../management/aaa-infrastructure.md#ug.aaa.authentication) for details.
-
-## Securing IPC Access
-
-Inter-Process Communication (IPC) security is crucial for safeguarding NSO's extensibility SDK API communications. Since the IPC socket allows full control of the system, it is important to ensure that only trusted or authorized clients can connect. See [Restricting Access to the IPC Socket](../../advanced-topics/ipc-connection.md#restricting-access-to-the-ipc-socket).
-
-Examples of programs that connect using IPC sockets:
-
-* Remote commands, such as `ncs --reload`.
-* MAAPI, CDB, DP, event notification API clients.
-* The `ncs_cli` program.
-* The `ncs_cmd` and `ncs_load` programs.
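-
-A minimal sketch of IPC clients at work; the socket path and the use of the `NCS_IPC_PATH` variable are assumptions here (see the linked section for the supported configuration):
-
-```bash
-# Direct IPC clients at a Unix domain socket instead of the default TCP socket:
-export NCS_IPC_PATH=/var/opt/ncs/ncs.ipc   # assumed socket path
-ncs --reload        # remote command over IPC
-ncs_cli -u admin    # CLI frontend over the same socket
-```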
-
-### Default Security
-
-* Only local connections to IPC sockets are allowed by default.
-* TCP sockets with no authentication.
-
-### Best Practices
-
-* Use Unix sockets for authenticating the client based on the UID of the other end of the socket connection.
-  * Root and the user NSO runs from always have access.
-* If using TCP sockets, configure NSO to use access checks with a pre-shared key.
-* If enabling non-localhost IPC over TCP sockets, implement encryption.
-
-See [Authenticating IPC Access](../../management/aaa-infrastructure.md#authenticating-ipc-access) for details.
-
-## Southbound Interface Security
-
-Secure communication with managed devices:
-
-* Use [Cisco-provided NEDs](../../management/ned-administration.md) when possible.
-* Refer to the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey) README, which references a deployment variant of the example for SSH key update patterns using nano services.
-
-## Cryptographic Key Management
-
-### Hashing Algorithms
-
-* Set the `ncs.conf` `/ncs-config/crypt-hash/algorithm` to SHA-512 for password hashing.
- * Used by the `ianach:crypt-hash` type for secure password storage.
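-
-A quick way to check the configured algorithm in a production `ncs.conf` (the file path is the system-install default from this guide; the exact XML layout is an assumption):
-
-```bash
-grep -A1 '<crypt-hash>' /etc/ncs/ncs.conf
-# Expect something like: <algorithm>sha-512</algorithm>
-```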
-
-### Encryption Keys
-
-* Generate new encryption keys before or at startup.
-* Replace or rotate keys generated by the NSO installer.
- * Rotate keys periodically.
-* Store keys securely (default location: `/etc/ncs/ncs.crypto_keys`).
-* The `ncs.crypto_keys` file contains the highly sensitive encryption keys for all encrypted CDB data.
-
-See [Cryptographic Keys](../../advanced-topics/cryptographic-keys.md) for details.
-
-## Rate Limiting and Resource Protection
-
-Implement various limiting mechanisms to prevent resource exhaustion:
-
-### NSO Configuration Limits
-
-NSO can be configured with some limits from `ncs.conf`:
-
-* `/ncs-config/session-limits`: Limit concurrent sessions.
-* `/ncs-config/transaction-limits`: Limit concurrent transactions.
-* `/ncs-config/parser-limits`: Limit XML data parsing.
-* `/ncs-config/webui/transport/unauthenticated-message-limit`: Limit unauthenticated message size.
-* `/ncs-config/webui/rate-limiting`: Limit JSON-RPC requests per hour.
-
-### External Rate Limiting
-
-For additional protection, implement rate limiting at the network level using tools like Linux `iptables`.
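-
-For instance, a hypothetical `iptables` sketch that rate-limits new connections to the HTTPS port (the port number and limits are assumed example values):
-
-```bash
-# Accept at most 25 new connections per minute to port 443, then drop the rest:
-sudo iptables -A INPUT -p tcp --dport 443 -m conntrack --ctstate NEW \
-  -m limit --limit 25/minute --limit-burst 50 -j ACCEPT
-sudo iptables -A INPUT -p tcp --dport 443 -m conntrack --ctstate NEW -j DROP
-```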
-
-## High Availability Security
-
-When deploying NSO in [HA (High Availability)](../../management/high-availability.md) configurations:
-
-* RAFT HA:
- * Uses encrypted TLS with mutual X.509 authentication.
-* Rule-based HA:
- * Unencrypted communication.
- * Shared token for authentication between HA group nodes.
-
-{% hint style="info" %}
-The encryption keys for all encrypted CDB data, stored by default in `/etc/ncs/ncs.crypto_keys`, must be identical across nodes.
-{% endhint %}
-
-## Compliance Reporting
-
-NSO provides comprehensive [compliance reporting](../../../operation-and-usage/operations/compliance-reporting.md) capabilities:
-
-* Track user actions - "Who has done what?"
-* Verify network configuration compliance.
-* Generate audit reports for regulatory requirements.
-
-## FIPS Mode
-
-For enhanced security and regulatory compliance:
-
-* FIPS mode restricts NSO to use only FIPS 140-3 validated cryptographic modules.
-* Enable with the `--fips-install` option during [installation](../system-install.md).
-* Required for certain government and regulated industry deployments.
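-
-For example, combining the flags mentioned above at install time (a sketch; see the linked installation instructions for the full procedure):
-
-```bash
-sh nso-VERSION.OS.ARCH.installer.bin --system-install --fips-install
-```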
diff --git a/administration/installation-and-deployment/development-to-production-deployment/README.md b/administration/installation-and-deployment/development-to-production-deployment/README.md
deleted file mode 100644
index 82f5cc47..00000000
--- a/administration/installation-and-deployment/development-to-production-deployment/README.md
+++ /dev/null
@@ -1,6 +0,0 @@
----
-description: Deploy NSO from development to production.
----
-
-# Development to Production Deployment
-
diff --git a/administration/installation-and-deployment/local-install.md b/administration/installation-and-deployment/local-install.md
deleted file mode 100644
index 6575bad2..00000000
--- a/administration/installation-and-deployment/local-install.md
+++ /dev/null
@@ -1,619 +0,0 @@
----
-description: >-
- Install NSO for non-production use, such as for development and training
- purposes.
----
-
-# Local Install
-
-## Installation Steps
-
-Complete the following activities in the given order to perform a Local Install of NSO.
-
-
-
-{% hint style="info" %}
-**Mode of Install**
-
-NSO Local Install can be performed in **standard mode** or in [**FIPS**](https://www.nist.gov/itl/publications-0/federal-information-processing-standards-fips)**-compliant mode**. A standard mode install supports a broader set of cryptographic algorithms, while a FIPS mode install restricts NSO to using only FIPS 140-3-validated cryptographic modules and algorithms for enhanced/regulated security and compliance. Use FIPS mode only in environments that require compliance with specific security standards, especially in U.S. federal agencies or regulated industries. For all other use cases, install NSO in standard mode.
-
-\* FIPS: Federal Information Processing Standards
-{% endhint %}
-
-### Step 1 - Fulfill System Requirements
-
-Start by setting up your system to install and run NSO.
-
-To install NSO:
-
-1. Fulfill at least the primary requirements.
-2. If you intend to build and run NSO examples, you also need to install additional applications listed under Additional Requirements.
-
-{% hint style="warning" %}
-Where requirements list a specific or higher version, there always exists a (small) possibility that a higher version introduces breaking changes. If in doubt whether the higher version is fully backwards compatible, always use the specific version.
-{% endhint %}
-
-
-
-Primary Requirements
-
-Primary requirements to do a Local Install include:
-
-* A system running Linux or macOS on either the `x86_64` or `ARM64` architecture for development. For [FIPS](https://www.nist.gov/itl/publications-0/federal-information-processing-standards-fips) mode, OS FIPS compliance may be required depending on your specific requirements.
-* GNU libc 2.24 or higher.
-* Java JRE 17 or higher. Used by Cisco Smart Licensing.
-* Required and included with many Linux/macOS distributions:
- * `tar` command. Unpack the installer.
- * `gzip` command. Unpack the installer.
- * `ssh-keygen` command. Generate SSH host key.
- * `openssl` command. Generate self-signed certificates for HTTPS.
- * `find` command. Used to find out if all required libraries are available.
- * `which` command. Used by the NSO package manager.
- * `libpam.so.0`. Pluggable Authentication Module library.
- * `libexpat.so.1`. EXtensible Markup Language parsing library.
- * `libz.so.1` version 1.2.7.1 or higher. Data compression library.
-
-
-
-
-
-Additional Requirements
-
-Additional requirements to, for example, build and run NSO examples/services include:
-
-* Java JDK 17 or higher.
-* Ant 1.9.8 or higher.
-* Python 3.10 or higher.
-* Python Setuptools is required to build the Python API.
-* Often installed using the Python package installer pip:
- * Python Paramiko 2.2 or higher. To use netconf-console.
- * Python requests. Used by the RESTCONF demo scripts.
-* `xsltproc` command. Used by the `support/ned-make-package-meta-data` command to generate the `package-meta-data.xml` file.
-* One of the following web browsers is required for NSO GUI capabilities. The version must be supported by the vendor at the time of release.
- * Safari
- * Mozilla Firefox
- * Microsoft Edge
- * Google Chrome
-* OpenSSH client applications. For example, the `ssh` and `scp` commands.
-
-
-
-
-
-FIPS Mode Entropy Requirements
-
-The following applies if you are running a container-based setup of your FIPS install:
-
-In containerized environments (e.g., Docker) that run on older Linux kernels (e.g., Ubuntu 18.04), `/dev/random` may block if the system’s entropy pool is low. This can lead to delays or hangs in FIPS mode, as cryptographic operations require high-quality randomness.
-
-To avoid this:
-
-* Prefer newer kernels (e.g., Ubuntu 22.04 or later), where entropy handling is improved to mitigate the issue.
-* Or, install an entropy daemon like Haveged on the Docker host to help maintain sufficient entropy.
-
-Check available entropy on the host system with:
-
-```bash
-cat /proc/sys/kernel/random/entropy_avail
-```
-
-A value of 256 or higher is generally considered safe. Reference: [Oracle blog post](https://blogs.oracle.com/linux/post/entropyavail-256-is-good-enough-for-everyone).
-
-
-
-### Step 2 - Download the Installer and NEDs
-
-To download the Cisco NSO installer and example NEDs:
-
-1. Go to the Cisco's official [Software Download](https://software.cisco.com/download/home) site.
-2. Search for the product "Network Services Orchestrator" and select the desired version.
-3. There are two versions of the NSO installer: one for macOS and one for Linux systems. Download the appropriate installer.
-
-
-
-Identifying the Installer
-
-You need to know your system specifications (Operating System and CPU architecture) in order to choose the appropriate NSO installer.
-
-NSO is delivered as an OS/CPU-specific signed self-extractable archive. The signed archive file has the pattern `nso-VERSION.OS.ARCH.signed.bin` that after signature verification extracts the `nso-VERSION.OS.ARCH.installer.bin` archive file, where:
-
-* `VERSION` is the NSO version to install.
-* `OS` is the Operating System (`linux` for all Linux distributions and `darwin` for macOS).
-* `ARCH` is the CPU architecture, for example `x86_64`.
-
-
-
-### Step 3 - Unpack the Installer
-
-If your downloaded file is a `signed.bin` file, it means that it has been digitally signed by Cisco, and upon execution, you will verify the signature and unpack the `installer.bin`.
-
-If you only have `installer.bin`, skip to the next step.
-
-To unpack the installer:
-
-1. In the terminal, list the binaries in the directory where you downloaded the installer, for example:
-
- ```bash
- cd ~/Downloads
- ls -l nso*.bin
- -rw-r--r--@ 1 user staff 199M Dec 15 11:45 nso-6.0.darwin.x86_64.installer.bin
- -rw-r--r--@ 1 user staff 199M Dec 15 11:45 nso-6.0.darwin.x86_64.signed.bin
- ```
-2. Use the `sh` command to run the `signed.bin` to verify the certificate and extract the installer binary and other files. An example output is shown below.
-
- ```bash
- sh nso-6.0.darwin.x86_64.signed.bin
- # Output
- Unpacking...
- Verifying signature...
- Downloading CA certificate from http://www.cisco.com/security/pki/certs/crcam2.cer ...
- Successfully downloaded and verified crcam2.cer.
- Downloading SubCA certificate from http://www.cisco.com/security/pki/certs/innerspace.cer ...
- Successfully downloaded and verified innerspace.cer.
- Successfully verified root, subca and end-entity certificate chain.
- Successfully fetched a public key from tailf.cer.
- Successfully verified the signature of nso-6.0.darwin.x86_64.installer.bin using tailf.cer
- ```
-3. List the files to check if extraction was successful.
-
- ```bash
- ls -l
- # Output
- -rw-r--r-- 1 user staff 1.8K Nov 29 06:05 README.signature
- -rw-r--r-- 1 user staff 12K Nov 29 06:05 cisco_x509_verify_release.py
- -rwxr-xr-x 1 user staff 199M Nov 29 05:55 nso-6.0.darwin.x86_64.installer.bin
- -rw-r--r-- 1 user staff 256B Nov 29 06:05 nso-6.0.darwin.x86_64.installer.bin.signature
- -rwxr-xr-x@ 1 user staff 199M Dec 15 11:45 nso-6.0.darwin.x86_64.signed.bin
- -rw-r--r-- 1 user staff 1.4K Nov 29 06:05 tailf.cer
- ```
-
-
-
-Description of Unpacked Files
-
-The following contents are unpacked:
-
-* `nso-VERSION.OS.ARCH.installer.bin`: The NSO installer.
-* `nso-VERSION.OS.ARCH.installer.bin.signature`: Signature generated for the NSO image.
-* `tailf.cer`: An enclosed Cisco signed x.509 end-entity certificate containing the public key that is used to verify the signature.
-* `README.signature`: File with further details on the unpacked content and steps on how to run the signature verification program. To manually verify the signature, refer to the steps in this file.
-* `cisco_x509_verify_release.py`: Python program that can be used to verify the 3-tier x.509 certificate chain and signature.
-* Multiple `.tar.gz` files: Bundled packages, extending the base NSO functionality.
-* Multiple `.tar.gz.signature` files: Digital signatures for the bundled packages.
-
-Since NSO version 6.3, a few additional NSO packages are included. They contain the following platform tools:
-
-* HCC
-* Observability Exporter
-* Phased Provisioning
-* Resource Manager
-
-For platform tools documentation, refer to the individual package's `README` file or to the [online documentation](https://nso-docs.cisco.com/resources).
-
-**NED packages**
-
-The NED packages that are available with the NSO installation are NetSim-based example NEDs. These NEDs are used for NSO examples only.
-
-Fetch the latest production-grade NEDs from [Cisco Software Download](https://software.cisco.com/download/home) using the URLs provided on your NED license certificates.
-
-**Manual pages**
-
-The installation program unpacks the NSO manual pages from the documentation archive into `$NCS_DIR/man`. `ncsrc` adds this directory to `$MANPATH`, allowing you to use the `man` command to view them. The manual pages are also available in PDF format and in the online documentation under [NCS man-pages, Volume 1](../../resources/man/README.md) in Manual Pages.
-
-Following is a list of a few of the installed manual pages:
-
-* `ncs(1)`: Command to start and control the NSO daemon.
-* `ncsc(1)`: NSO YANG compiler.
-* `ncs_cli(1)`: Frontend to the NSO CLI engine.
-* `ncs-netsim(1)`: Command to create and manipulate a simulated network.
-* `ncs-setup(1)`: Command to create an initial NSO setup.
-* `ncs.conf(5)`: NSO daemon configuration file format.
-
-For example, to view the manual page describing the NSO configuration file, you should type:
-
-```bash
-$ man ncs.conf
-```
-
-Apart from the manual pages, extensive information about command-line options can be obtained by running `ncs` and `ncsc` with the `--help` (abbreviated `-h`) flag.
-
-```bash
-$ ncs --help
-```
-
-```bash
-$ ncsc --help
-```
-
-**Installer help**
-
-Run the `sh nso-VERSION.darwin.x86_64.installer.bin --help` command to view additional help on running binaries. More details can be found in the [ncs-installer(1)](../../resources/man/ncs-installer.1.md) Manual Page included with NSO.
-
-Notice the two options for `--local-install` or `--system-install`. An example output is shown below.
-
-```bash
-sh nso-6.0.darwin.x86_64.installer.bin --help
-
-# Output
-This is the NCS installation script.
-Usage: ./nso-6.0.darwin.x86_64.installer.bin [--local-install] LocalInstallDir
-Installs NCS in the LocalInstallDir directory only.
-This is convenient for test and development purposes.
-Usage: ./nso-6.0.darwin.x86_64.installer.bin --system-install
-[--install-dir InstallDir]
-[--config-dir ConfigDir] [--run-dir RunDir] [--log-dir LogDir]
-[--run-as-user User] [--keep-ncs-setup] [--non-interactive]
-
-Does a system install of NCS, suitable for deployment.
-Static files are installed in InstallDir/ncs-VERSION.
-The first time --system-install is used, the ConfigDir,
-RunDir, and LogDir directories are also created and
-populated for config files, run-time state files, and log files,
-respectively, and an init script for start of NCS at system boot
-and user profile scripts are installed. Defaults are:
-
-InstallDir - /opt/ncs
-ConfigDir - /etc/ncs
-RunDir - /var/opt/ncs
-LogDir - /var/log/ncs
-
-By default, the system install will run NCS as the root user.
-If the --run-as-user option is given, the system install will
-instead run NCS as the given user. The user will be created if
-it does not already exist.
-If the --non-interactive option is given, the installer will
-proceed with potentially disruptive changes (e.g. modifying or
-removing existing files) without asking for confirmation.
-```
-
-
-
-### Step 4 - Run the Installer
-
-Local Install of NSO software is performed in a single user-specified directory, for example in your `$HOME` directory.
-
-{% hint style="success" %}
-It is always recommended to install NSO in a directory named as the version of the release, for example, if the version being installed is `6.1`, the directory should be `~/nso-6.1`.
-{% endhint %}
-
-To run the installer:
-
-1. Navigate to your Install Directory.
-2. Run the command given below to install NSO in your Install Directory. The `--local-install` parameter is optional. At this point, you can choose to install NSO in standard mode or in FIPS mode.
-
-{% tabs %}
-{% tab title="Standard Local Install" %}
-The standard mode is the regular NSO install and is suitable for most installations. FIPS is disabled in this mode.
-
-For standard NSO install, run the installer as below:
-
-```bash
-$ sh nso-VERSION.OS.ARCH.installer.bin $HOME/ncs-VERSION --local-install
-```
-
-An example output is shown below:
-
-{% code title="Example: Standard Local Install" %}
-```bash
-sh nso-6.0.darwin.x86_64.installer.bin --local-install ~/nso-6.0
-
-# Output
-INFO Using temporary directory /var/folders/90/n5sbctr922336_
-0jrzhb54400000gn/T//ncs_installer.93831 to stage NCS installation bundle
-INFO Unpacked ncs-6.0 in /Users/user/nso-6.0
-INFO Found and unpacked corresponding DOCUMENTATION_PACKAGE
-INFO Found and unpacked corresponding EXAMPLE_PACKAGE
-INFO Found and unpacked corresponding JAVA_PACKAGE
-INFO Generating default SSH hostkey (this may take some time)
-INFO SSH hostkey generated
-INFO Environment set-up generated in /Users/user/nso-6.0/ncsrc
-INFO NSO installation script finished
-INFO Found and unpacked corresponding NETSIM_PACKAGE
-INFO NCS installation complete
-```
-{% endcode %}
-{% endtab %}
-
-{% tab title="FIPS Local Install" %}
-FIPS mode creates a FIPS-compliant NSO install.
-
-FIPS mode should only be used for deployments that are subject to strict compliance regulations, as the cryptographic functions are then confined to the CiscoSSL FIPS 140-3 module library.
-
-For FIPS-compliant NSO install, run the installer with the additional `--fips-install` flag. Afterwards, verify FIPS in `ncs.conf`.
-
-```bash
-$ sh nso-VERSION.OS.ARCH.installer.bin $HOME/ncs-VERSION --local-install --fips-install
-```
-
-{% hint style="info" %}
-**NSO Configuration for FIPS**
-
-Note the following as part of FIPS-specific configuration/install:
-
-1. The `ncs.conf` file is automatically configured to enable FIPS by setting the following flag:
-
-```xml
-<fips-mode>
-  <enabled>true</enabled>
-</fips-mode>
-```
-
-2. Additional environment variables (`NCS_OPENSSL_CONF_INCLUDE`, `NCS_OPENSSL_CONF`, `NCS_OPENSSL_MODULES`) are configured in `ncsrc` for FIPS compliance.
-3. The default `crypto.so` is overwritten at install for FIPS compliance.
-
-Additionally, note that:
-
-* As certain algorithms typically available with CiscoSSL are not included in the FIPS 140-3 validated module (and therefore disabled in FIPS mode), you need to configure NSO to use only the algorithms and cryptographic suites available through the CiscoSSL FIPS 140-3 object module.
-* With FIPS, NSO signals the NEDs to operate in FIPS mode using Bouncy Castle FIPS libraries for Java-based components, ensuring compliance with FIPS 140-3. To support this, NED packages may also require upgrading, as older versions — particularly SSH-based NEDs — often lack the necessary FIPS signaling or Bouncy Castle support required for cryptographic compliance.
-* Configure SSH keys in `ncs.conf` and `init.xml`.
-{% endhint %}
-{% endtab %}
-{% endtabs %}
-
-### Step 5 - Set Environment Variables
-
-The installation program creates a shell script file named `ncsrc` in each NSO installation, which sets the environment variables.
-
-To set the environment variables:
-
-1. Source the `ncsrc` file to get the environment variables settings in your shell. You may want to add this sourcing command to your login sequence, such as `.bashrc`.
-
- For `csh/tcsh` users, there is an `ncsrc.tcsh` file with `csh/tcsh` syntax. The example below assumes that you are using `bash`; other versions of `/bin/sh` may require that you use `.` instead of `source`.
-
- ```bash
- $ source $HOME/ncs-VERSION/ncsrc
- ```
-2. Most users add `source ~/nso-x.x/ncsrc` (where `x.x` is the NSO version) to their `~/.bash_profile`, but you can also simply source it manually when needed. Once it has been sourced, you have access to all the NSO executable commands, which start with `ncs`.
-
- ```bash
- ncs {TAB} {TAB}
-
- # Output
- ncs                      ncs-maapi                ncs-project              ncs-start-python-vm      ncs_cmd
- ncs-backup               ncs-make-package         ncs-setup                ncs-uninstall            ncs_conf_tool
- ncs-collect-tech-report  ncs-netsim               ncs-start-java-vm        ncs_cli                  ncs_crypto_keys
- ncs_load                 ncsc
- ```
-
-### Step 6 - Create Runtime Directory
-
-NSO needs a deployment/runtime directory where the database files, logs, etc. are stored. An empty default directory can be created using the `ncs-setup` command.
-
-To create a Runtime Directory:
-
-1. Create a Runtime Directory for NSO by running the following command. In this case, we assume that the directory is `$HOME/ncs-run`.
-
- ```bash
- $ ncs-setup --dest $HOME/ncs-run
- ```
-2. Start the NSO daemon `ncs`.
-
- ```bash
- $ cd $HOME/ncs-run
- $ ncs
- ```
-
-
-
-Runtime vs. Installation Directory
-
-A common misunderstanding is that there exists a dependency between the Runtime Directory and the Installation Directory. This is not true. For example, say that you have two NSO local installations `path/to/nso-6.4` and `path/to/nso-6.4.1`. The following sequence runs `nso-6.4` but uses an example and configuration from `nso-6.4.1`.
-
-```bash
- $ cd path/to/nso-6.4
- $ . ncsrc
- $ cd path/to/nso-6.4.1/examples.ncs/service-management/datacenter-qinq
- $ ncs
-```
-
-Since the Runtime Directory is self-contained, this is also the way to move between examples. And since the Runtime Directory is self-contained including the database files, you can compress a complete directory and distribute it. Unpacking that directory and starting NSO from there gives an exact copy of all instance data.
-
-```bash
- $ cd path/to/nso-6.4.1/examples.ncs/service-management/datacenter-qinq
- $ ncs
- $ ncs --stop
- $ cd path/to/nso-6.4.1/examples.ncs/device-management/simulated-cisco-ios
- $ ncs
- $ ncs --stop
-```
-
-
-
-{% hint style="warning" %}
-The `ncs-setup` command creates an `ncs.conf` file that uses predefined encryption keys for easier migration of data across installations. It is not suitable for cases where data confidentiality is required, such as a production deployment. See [Cryptographic Keys](../advanced-topics/cryptographic-keys.md) for ways to generate suitable keys.
-{% endhint %}
-
-### Step 7 - Generate License Registration Token
-
-To conclude the NSO installation, a license registration token must be created using a Cisco Smart Software Manager (CSSM) account. This is because NSO uses Cisco Smart Licensing, as described in [Cisco Smart Licensing](../management/system-management/cisco-smart-licensing.md), to make it easy to deploy and manage NSO license entitlements. Login credentials for the [Cisco Smart Software Manager](https://www.cisco.com/c/en/us/buy/smart-accounts/software-manager.html) account are provided by your Cisco contact, and detailed instructions on how to [create a registration token](../management/system-management/cisco-smart-licensing.md#d5e2927) are also found there. General licensing information covering licensing models, how licensing works, usage compliance, etc., is covered in the [Cisco Software Licensing Guide](https://www.cisco.com/c/en/us/buy/licensing/licensing-guide.html).
-
-To generate a license registration token:
-
-1. When you have a token, start a Cisco-style CLI session towards NSO and enter the token, for example:
-
- ```bash
- $ ncs_cli -Cu admin
- admin@ncs# license smart register idtoken YzIzMDM3MTgtZTRkNC00YjkxLTk2ODQt
- OGEzMTM3OTg5MG
- Registration process in progress.
- Use the 'show license status' command to check the progress and result.
- ```
-
- \
- Upon successful registration, NSO automatically requests a license entitlement for its own instance and for the number of devices it orchestrates and their NED types. If development mode has been enabled, only development entitlement for the NSO instance itself is requested.
-2. Inspect the requested entitlements using the command `show license all` (or by inspecting the NSO daemon log). An example output is shown below.
-
- ```bash
- admin@ncs# show license all
- ...
- 21-Apr-2016::11:29:18.022 miosaterm confd[8226]:
- Smart Licensing Global Notification:
- type = "notifyRegisterSuccess",
- agentID = "sa1",
- enforceMode = "notApplicable",
- allowRestricted = false,
- failReasonCode = "success",
- failMessage = "Successful."
- 21-Apr-2016::11:29:23.029 miosaterm confd[8226]:
- Smart Licensing Entitlement Notification: type = "notifyEnforcementMode",
- agentID = "sa1",
- notificationTime = "Apr 21 11:29:20 2016",
- version = "1.0",
- displayName = "regid.2015-10.com.cisco.NSO-network-element",
- requestedDate = "Apr 21 11:26:19 2016",
- tag = "regid.2015-10.com.cisco.NSO-network-element",
- enforceMode = "inCompliance",
- daysLeft = 90,
- expiryDate = "Jul 20 11:26:19 2016",
- requestedCount = 8
- ...
- ```
-
-
-
-Evaluation Period
-
-If no registration token is provided, NSO enters a 90-day evaluation period, and the remaining evaluation time is recorded hourly in the NSO daemon log:
-
-```
-...
- 13-Apr-2016::13:22:29.178 miosaterm confd[16260]:
- Starting the NCS Smart Licensing Java VM
- 13-Apr-2016::13:22:34.737 miosaterm confd[16260]:
-Smart Licensing evaluation time remaining: 90d 0h 0m 0s
-...
- 13-Apr-2016::13:22:34.737 miosaterm confd[16260]:
- Smart Licensing evaluation time remaining: 89d 23h 0m 0s
-...
-```
-
-
-
-
-
-Communication Send Error
-
-During upgrades, if you experience the 'Communication Send Error' with license registration, restart the Smart Agent.
-
-
-
-
-
-If You are Unable to Access Cisco Smart Software Manager
-
-In a situation where the NSO instance has no direct access to the Cisco Smart Software Manager, one option is the [Cisco Smart Software Manager Satellite](https://software.cisco.com/software/csws/ws/platform/home), which can be installed on the premises to manage software licenses. Install the satellite and use the command `call-home destination address http <url>` to point to it.
-
-Another option when direct access is not desired is to configure an HTTP or HTTPS proxy, e.g., `smart-license smart-agent proxy url https://127.0.0.1:8080`. If you plan to do this, take the note below regarding ignored CLI configurations into account:
-
-If `ncs.conf` contains a configuration for any of `java-executable`, `java-options`, `override-url/url`, or `proxy/url` under the configuration path `/ncs-config/smart-license/smart-agent/`, then any corresponding configuration done via the CLI is ignored.
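-
-A sketch of the two CLI configurations discussed above (the satellite URL is a placeholder; the proxy URL is the example value from this section):
-
-```bash
-admin@ncs# config
-admin@ncs(config)# call-home destination address http http://<satellite-address>/
-admin@ncs(config)# smart-license smart-agent proxy url https://127.0.0.1:8080
-admin@ncs(config)# commit
-```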
-
-
-
-
-
-License Registration in High Availability (HA) Mode
-
-When configuring NSO in HA mode, the license registration token must be provided to the CLI running on the primary node. Read more about HA and node types in NSO [High Availability](../management/high-availability.md).
-
-
-
-
-
-Licensing Log
-
-Licensing activities are also logged in the NSO daemon log as described in [Monitoring NSO](../management/system-management/#d5e7876). For example, a successful token registration results in the following log entry:
-
-```
- 21-Apr-2016::11:29:18.022 miosaterm confd[8226]:
- Smart Licensing Global Notification:
- type = "notifyRegisterSuccess"
-```
-
-
-
-
-
-Check Registration Status
-
-To check the registration status, use the command `show license status`. An example output is shown below.
-
-```bash
-admin@ncs# show license status
-Smart Licensing is ENABLED
-
-Registration:
-Status: REGISTERED
-Smart Account: Network Services Orchestrator
-Virtual Account: Default
-Export-Controlled Functionality: Allowed
-Initial Registration: SUCCEEDED on Apr 21 09:29:11 2016 UTC
-Last Renewal Attempt: SUCCEEDED on Apr 21 09:29:16 2016 UTC
-Next Renewal Attempt: Oct 18 09:29:16 2016 UTC
-Registration Expires: Apr 21 09:26:13 2017 UTC
-Export-Controlled Functionality: Allowed
-
-License Authorization:
-Status: IN COMPLIANCE on Apr 21 09:29:18 2016 UTC
-Last Communication Attempt: SUCCEEDED on Apr 21 09:26:30 2016 UTC
-Next Communication Attempt: Apr 21 21:29:32 2016 UTC
-Communication Deadline: Apr 21 09:26:13 2017 UTC
-```
-
-
-
-## Local Install FAQs
-
-Frequently Asked Questions (FAQs) about Local Install.
-
-
-
-Is there a dependency between the NSO Installation Directory and Runtime Directory?
-
-No, there is no such dependency.
-
-
-
-
-
-Do you need to source the ncsrc file before starting NSO?
-
-Yes.
-
-
-
-
-
-Can you start NSO from a directory that is not an NSO runtime directory?
-
-No. To start NSO, you need to point to the run directory.
-
-
-
-
-
-Can you stop NSO from a directory that is not an NSO runtime directory?
-
-Yes.
-
-
-
-
-
-Can we move NSO Installation from one folder to another?
-
-Yes. You can move the directory where you installed NSO to a new location in your directory tree. Simply move NSO's root directory to the new desired location and update this file: `$NCS_DIR/ncsrc` (and `ncsrc.tcsh` if you want). This is a small and handy script that sets up some environment variables for you. Update the paths to the new location. The `$NCS_DIR/bin/ncs` and `$NCS_DIR/bin/ncsc` scripts will determine the location of NSO's root directory automatically.
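-
-A hedged sketch of such a move (the paths and the old install location baked into `ncsrc` are examples only):
-
-```bash
-# Move the installation and point ncsrc at the new location:
-mv ~/nso-6.1 /opt/tools/nso-6.1
-sed -i 's|/home/user/nso-6.1|/opt/tools/nso-6.1|g' /opt/tools/nso-6.1/ncsrc
-source /opt/tools/nso-6.1/ncsrc
-```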
-
-
-
-***
-
-**Next Steps**
-
-{% content-ref url="post-install-actions/explore-the-installation.md" %}
-[explore-the-installation.md](post-install-actions/explore-the-installation.md)
-{% endcontent-ref %}
diff --git a/administration/installation-and-deployment/post-install-actions/README.md b/administration/installation-and-deployment/post-install-actions/README.md
deleted file mode 100644
index aa3a4d4e..00000000
--- a/administration/installation-and-deployment/post-install-actions/README.md
+++ /dev/null
@@ -1,47 +0,0 @@
----
-description: Perform actions and activities possible after installing NSO.
----
-
-# Post-Install Actions
-
-The following actions are possible after installing NSO.
-
-## After Local Install
-
-{% content-ref url="explore-the-installation.md" %}
-[explore-the-installation.md](explore-the-installation.md)
-{% endcontent-ref %}
-
-{% content-ref url="start-stop-nso.md" %}
-[start-stop-nso.md](start-stop-nso.md)
-{% endcontent-ref %}
-
-{% content-ref url="create-nso-instance.md" %}
-[create-nso-instance.md](create-nso-instance.md)
-{% endcontent-ref %}
-
-{% content-ref url="enable-development-mode.md" %}
-[enable-development-mode.md](enable-development-mode.md)
-{% endcontent-ref %}
-
-{% content-ref url="running-nso-examples.md" %}
-[running-nso-examples.md](running-nso-examples.md)
-{% endcontent-ref %}
-
-{% content-ref url="migrate-to-system-install.md" %}
-[migrate-to-system-install.md](migrate-to-system-install.md)
-{% endcontent-ref %}
-
-{% content-ref url="uninstall-local-install.md" %}
-[uninstall-local-install.md](uninstall-local-install.md)
-{% endcontent-ref %}
-
-## After System Install
-
-{% content-ref url="modify-examples-for-system-install.md" %}
-[modify-examples-for-system-install.md](modify-examples-for-system-install.md)
-{% endcontent-ref %}
-
-{% content-ref url="uninstall-system-install.md" %}
-[uninstall-system-install.md](uninstall-system-install.md)
-{% endcontent-ref %}
diff --git a/administration/installation-and-deployment/post-install-actions/create-nso-instance.md b/administration/installation-and-deployment/post-install-actions/create-nso-instance.md
deleted file mode 100644
index 8c779b3d..00000000
--- a/administration/installation-and-deployment/post-install-actions/create-nso-instance.md
+++ /dev/null
@@ -1,68 +0,0 @@
----
-description: Create a new NSO instance for Local Install.
----
-
-# Create NSO Instance
-
-{% hint style="warning" %}
-Applies to Local Install.
-{% endhint %}
-
-One of the included scripts with an NSO installation is the `ncs-setup`, which makes it very easy to create instances of NSO from a Local Install. You can look at the `--help` or [ncs-setup(1)](../../../resources/man/ncs-setup.1.md) in Manual Pages for more details, but the two options we need to know are:
-
-* `--dest` defines the directory where you want to set up NSO. If the directory does not exist, it will be created.
-* `--package` defines the NEDs that you want to have installed. You can specify this option multiple times.
-
-{% hint style="info" %}
-NCS is the original name of the NSO product. Therefore, many of the commands and application features are prefaced with `ncs`. You can think of NCS as another name for NSO.
-{% endhint %}
-
-To create an NSO instance:
-
-1. Run the command to set up an NSO instance in the current directory with the IOS, NX-OS, IOS-XR and ASA NEDs. You only need one NED per platform that you want NSO to manage, even if you may have multiple versions in your installer `neds` directory.
-
- \
- Use the name of the NED folder in `${NCS_DIR}/packages/neds` for the latest NED version that you have installed for the target platform. Use the tab key to complete the path after you start typing (alternatively, copy and paste). Verify that the NED versions below match what is currently on the sandbox to avoid a syntax error. See the example below.
-
- ```bash
- ncs-setup --package ~/nso-6.0/packages/neds/cisco-ios-cli-6.44 \
- --package ~/nso-6.0/packages/neds/cisco-nx-cli-5.15 \
- --package ~/nso-6.0/packages/neds/cisco-iosxr-cli-7.20 \
- --package ~/nso-6.0/packages/neds/cisco-asa-cli-6.8 \
- --dest nso-instance
- ```
-2. Check the `nso-instance` directory. Notice that several new files and folders are created.
-
- ```bash
- $ ls nso-instance/
- logs ncs-cdb ncs.conf packages README.ncs scripts state
- $ ls -l nso-instance/packages/
- total 0
- lrwxrwxrwx 1 user docker 51 Mar 19 12:44 cisco-asa-cli-6.8 ->
- /home/user/nso-6.0/packages/neds/cisco-asa-cli-6.8
-
- lrwxrwxrwx 1 user docker 52 Mar 19 12:44 cisco-ios-cli-6.44 ->
- /home/user/nso-6.0/packages/neds/cisco-ios-cli-6.44
-
- lrwxrwxrwx 1 user docker 54 Mar 19 12:44 cisco-iosxr-cli-7.20 ->
- /home/user/nso-6.0/packages/neds/cisco-iosxr-cli-7.20
-
- lrwxrwxrwx 1 user docker 51 Mar 19 12:44 cisco-nx-cli-5.15 ->
- /home/user/nso-6.0/packages/neds/cisco-nx-cli-5.15
- $
- ```
-
- Following is a description of the important files and folders:
-
- * `ncs.conf` is the NSO application configuration file and is used to customize aspects of the NSO instance (for example, to change ports, enable/disable features, and so on.) See [ncs.conf(5)](../../../resources/man/ncs.conf.5.md) in Manual Pages for information.
- * `packages/` is the directory that has symlinks to the NEDs that we referenced in the `--package` arguments at the time of setup. See [NSO Packages](../../../development/core-concepts/packages.md) in Development for more information.
- * `logs/` is the directory that contains all the logs from NSO. This directory is useful for troubleshooting.
-3. Start the NSO instance by navigating to the `nso-instance` directory and typing the `ncs` command. You must be situated in the `nso-instance` directory each time you want to start or stop NSO. If you have multiple instances, you need to navigate to each one and use the `ncs` command to start or stop each one.
-4. Verify that NSO is running by using the `ncs --status | grep status` command.
-
- ```bash
- $ ncs --status | grep status
- status: started
- db=running id=31 priority=1 path=/ncs:devices/device/live-status-protocol/device-type
- ```
-5. Add netsim or lab devices; run `ncs-netsim -h` to see the available commands.
diff --git a/administration/installation-and-deployment/post-install-actions/enable-development-mode.md b/administration/installation-and-deployment/post-install-actions/enable-development-mode.md
deleted file mode 100644
index d4909f59..00000000
--- a/administration/installation-and-deployment/post-install-actions/enable-development-mode.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Enable your NSO instance for development purposes.
----
-
-# Enable Development Mode
-
-{% hint style="warning" %}
-Applies to Local Install
-{% endhint %}
-
-If you intend to use your NSO instance for development purposes, enable the development mode using the command `license smart development enable`.
diff --git a/administration/installation-and-deployment/post-install-actions/explore-the-installation.md b/administration/installation-and-deployment/post-install-actions/explore-the-installation.md
deleted file mode 100644
index 11adb134..00000000
--- a/administration/installation-and-deployment/post-install-actions/explore-the-installation.md
+++ /dev/null
@@ -1,165 +0,0 @@
----
-description: Explore NSO contents after finishing the installation.
----
-
-# Explore the Installation
-
-{% hint style="warning" %}
-Applies to Local Install.
-{% endhint %}
-
-Before starting NSO, it is recommended to explore the installation contents.
-
-Navigate to the newly created Installation Directory, for example:
-
-```bash
-cd ~/nso-6.0
-```
-
-## Contents of the Installation Directory
-
-The installation directory includes the following contents:
-
-* [Documentation](explore-the-installation.md#d5e552)
-* [Examples](explore-the-installation.md#d5e560)
-* [Network Element Drivers](explore-the-installation.md#d5e564)
-* [Shell scripts](explore-the-installation.md#d5e604)
-
-### Documentation
-
-Along with the binaries, NSO installs a full set of documentation available in the `doc/` folder in the Installation Directory. There is also an online version of the documentation available from [DevNet](https://developer.cisco.com/docs/nso/nso-fundamentals/).
-
-```bash
-ls -l doc/
-drwxr-xr-x 5 user staff 160B Nov 29 05:19 api/
-drwxr-xr-x 14 user staff 448B Nov 29 05:19 html/
--rw-r--r-- 1 user staff 202B Nov 29 05:19 index.html
-drwxr-xr-x 17 user staff 544B Nov 29 05:19 pdf/
-```
-
-Open `index.html` in your browser to explore further.
-
-### Examples
-
-Local Install comes with a rich set of [examples](https://github.com/NSO-developer/nso-examples/tree/6.6) to start using NSO.
-
-```bash
-$ ls -1 examples.ncs/
-README.md
-aaa
-common
-device-management
-getting-started
-high-availability
-layered-services-architecture
-misc
-nano-services
-northbound-interfaces
-scaling-performance
-sdk-api
-service-management
-```
-
-### Network Element Drivers (NEDs)
-
-In order to communicate with the network, NSO uses NEDs as device drivers for different device types. Cisco has NEDs for hundreds of different devices available for customers, and several are included in the installer in the `/packages/neds` directory.
-
-In the example below, NEDs for Cisco ASA, IOS, IOS XR, and NX-OS are shown. Also included are NEDs for other vendors including Juniper JunOS, A10, ALU, and Dell.
-
-```bash
-$ ls -1 packages/neds
-a10-acos-cli-3.0
-alu-sr-cli-3.4
-cisco-asa-cli-6.6
-cisco-ios-cli-3.0
-cisco-ios-cli-3.8
-cisco-iosxr-cli-3.0
-cisco-iosxr-cli-3.5
-cisco-nx-cli-3.0
-dell-ftos-cli-3.0
-juniper-junos-nc-3.0
-```
-
-{% hint style="info" %}
-The example NEDs included in the installer are intended for evaluation, demonstration, and use with the [examples.ncs](https://github.com/NSO-developer/nso-examples/tree/6.6) examples. These are not the latest versions available and often do not have all the features available in production NEDs.
-{% endhint %}
-
-#### **Install New NEDs**
-
-A large number of pre-built supported NEDs are available which can be acquired and downloaded by the customers from [Cisco Software Download](https://software.cisco.com/). Note that the specific file names and versions that you download may be different from the ones in this guide. Therefore, remember to update the paths accordingly.
-
-Like the NSO installer, the NEDs are `signed.bin` files that need to be run to validate the download and extract the new code.
-
-To install new NEDs:
-
-1. Change to the working directory where your downloads are. The filenames indicate which version of NSO the NEDs are pre-compiled for (in this case NSO 6.0), and the version of the NED. An example output is shown below.
-
- ```bash
- cd ~/Downloads/
- ls -l ncs*.bin
-
- # Output
- -rw-r--r--@ 1 user staff 9708091 Dec 18 12:05 ncs-6.0-cisco-asa-6.7.7.signed.bin
- -rw-r--r--@ 1 user staff 51233042 Dec 18 12:06 ncs-6.0-cisco-ios-6.42.1.signed.bin
- -rw-r--r--@ 1 user staff 8400190 Dec 18 12:05 ncs-6.0-cisco-nx-5.13.1.1.signed.bin
- ```
-2. Use the `sh` command to run `signed.bin` to verify the certificate and extract the NED tar.gz and other files. Repeat for all files. An example output is shown below.
-
- ```bash
- sh ncs-6.0-cisco-nx-5.13.1.1.signed.bin
-
- Unpacking...
- Verifying signature...
- Downloading CA certificate from http://www.cisco.com/security/pki/certs/crcam2.cer ...
- Successfully downloaded and verified crcam2.cer.
- Downloading SubCA certificate from http://www.cisco.com/security/pki/certs/innerspace.cer ...
- Successfully downloaded and verified innerspace.cer.
- Successfully verified root, subca and end-entity certificate chain.
- Successfully fetched a public key from tailf.cer.
- Successfully verified the signature of ncs-6.0-cisco-nx-5.13.1.1.tar.gz using tailf.cer
- ```
-3. You now have three tarball (`.tar.gz`) files. These are compressed versions of the NEDs. List the files to verify, as shown in the example below.
-
- ```bash
- ls -l ncs*.tar.gz
- -rw-r--r-- 1 user staff 9704896 Dec 12 21:11 ncs-6.0-cisco-asa-6.7.7.tar.gz
- -rw-r--r-- 1 user staff 51260488 Dec 13 22:58 ncs-6.0-cisco-ios-6.42.1.tar.gz
- -rw-r--r-- 1 user staff 8409288 Dec 18 09:09 ncs-6.0-cisco-nx-5.13.1.1.tar.gz
- ```
-4. Navigate to the `packages/neds` directory for your Local Install, for example:
-
- ```bash
- cd ~/nso-6.0/packages/neds
- ```
-5. In the `/packages/neds` directory, extract the `.tar.gz` files into this directory using the `tar` command with the path to where the compressed NED is located. An example is shown below.
-
- ```
- tar -zxvf ~/Downloads/ncs-6.0-cisco-nx-5.13.1.1.tar.gz
- tar -zxvf ~/Downloads/ncs-6.0-cisco-ios-6.42.1.tar.gz
- tar -zxvf ~/Downloads/ncs-6.0-cisco-asa-6.7.7.tar.gz
- ```
-
- \
- Here is a sample list of the newer NEDs extracted along with the ones bundled with the installation:
-
- ```
- drwxr-xr-x 13 user staff 416 Nov 29 05:17 a10-acos-cli-3.0
- drwxr-xr-x 12 user staff 384 Nov 29 05:17 alu-sr-cli-3.4
- drwxr-xr-x 13 user staff 416 Nov 29 05:17 cisco-asa-cli-6.6
- drwxr-xr-x 13 user staff 416 Dec 12 21:11 cisco-asa-cli-6.7
- drwxr-xr-x 12 user staff 384 Nov 29 05:17 cisco-ios-cli-3.0
- drwxr-xr-x 12 user staff 384 Nov 29 05:17 cisco-ios-cli-3.8
- drwxr-xr-x 13 user staff 416 Dec 13 22:58 cisco-ios-cli-6.42
- drwxr-xr-x 13 user staff 416 Nov 29 05:17 cisco-iosxr-cli-3.0
- drwxr-xr-x 13 user staff 416 Nov 29 05:17 cisco-iosxr-cli-3.5
- drwxr-xr-x 13 user staff 416 Nov 29 05:17 cisco-nx-cli-3.0
- drwxr-xr-x 14 user staff 448 Dec 18 09:09 cisco-nx-cli-5.13
- drwxr-xr-x 13 user staff 416 Nov 29 05:17 dell-ftos-cli-3.0
- drwxr-xr-x 10 user staff 320 Nov 29 05:17 juniper-junos-nc-3.0
- ```
-
-### Shell Scripts
-
-The last thing to note is the files `ncsrc` and `ncsrc.tcsh`. These are shell scripts for `bash` and `tcsh` that set up your `PATH` and other environment variables for NSO. Depending on your shell, you need to source one of these files before starting NSO.
-
-For more information on sourcing the shell script, see the [Local Install steps](../local-install.md).
diff --git a/administration/installation-and-deployment/post-install-actions/migrate-to-system-install.md b/administration/installation-and-deployment/post-install-actions/migrate-to-system-install.md
deleted file mode 100644
index 55522933..00000000
--- a/administration/installation-and-deployment/post-install-actions/migrate-to-system-install.md
+++ /dev/null
@@ -1,125 +0,0 @@
----
-description: Convert your current Local Install setup to a System Install.
----
-
-# Migrate to System Install
-
-{% hint style="warning" %}
-Applies to Local Install.
-{% endhint %}
-
-If you already have a Local Install with existing data that you would like to convert into a System Install, the following procedure allows you to do so. However, a reverse migration from System to Local Install is not supported.
-
-{% hint style="info" %}
-It is possible to perform the migration and upgrade simultaneously to a newer NSO version; however, doing so introduces additional complexity. If you run into issues, first migrate, and then perform the upgrade.
-{% endhint %}
-
-The following procedure assumes that NSO is installed as described in the NSO Local Install process and will perform an initial System Install of the same NSO version. After following these steps, consult the NSO System Install guide for additional steps that are required for a fully functional System Install.
-
-The procedure also assumes you are using the `$HOME/ncs-run` folder as the run directory. If this is not the case, modify the following path accordingly.
-
-To migrate to System Install:
-
-1. Stop the current (local) NSO instance if it is running.
-
- ```bash
- $ ncs --stop
- ```
-2. Take a complete backup of the Runtime Directory for potential disaster recovery.
-
- ```bash
- $ tar -czf $HOME/ncs-backup.tar.gz -C $HOME ncs-run
- ```
-3. Change to Super User privileges.
-
- ```bash
- $ sudo -s
- ```
-4. Start the NSO System Install.
-
- ```bash
- $ sh nso-VERSION.OS.ARCH.installer.bin --system-install
- ```
-5. If you have multiple versions of NSO installed, verify that the symbolic link in `/opt/ncs` points to the correct version.
-6. Copy the CDB files containing data to the central location.
-
- ```bash
- # cp $HOME/ncs-run/ncs-cdb/*.cdb /var/opt/ncs/cdb
- ```
-7. Ensure that the `/var/opt/ncs/packages` directory includes all the necessary packages appropriate for the NSO version. However, copying the packages directly into it could later interfere with the operation of the `nct` command; it is better to use only symbolic links in that folder. Instead, copy the existing packages to the `/opt/ncs/packages` directory, either as directories or as tarball files. Make sure that each package includes the NSO version in its name and is not just a symlink, for example:
-
- ```bash
- # cd $HOME/ncs-run/packages
- # for pkg in *; do cp -RL $pkg /opt/ncs/packages/ncs-VERSION-$pkg; done
- ```
-8. Link to these packages in the `/var/opt/ncs/packages` directory.
-
- ```bash
- # cd /var/opt/ncs/packages/
- # rm -f *
- # for pkg in /opt/ncs/packages/ncs-VERSION-*; do ln -s $pkg; done
- ```
-
- \
- The reason for prepending `ncs-VERSION` to the filename is to allow additional NSO commands, such as `nct upgrade` and `software packages`, to work properly. These commands need to identify which NSO version a package was compiled for.
-9. Edit the `/etc/ncs/ncs.conf` configuration file and make the necessary changes. If you wish to use the configuration from Local Install, disable the local authentication, unless you fully understand its security implications.
-
- ```xml
-
- false
-
- ```
-10. When starting NSO at boot using `systemd`, make sure that you set the package reload option from the `/etc/ncs/ncs.systemd.conf` environment file to `true`. Or, for example, set `NCS_RELOAD_PACKAGES=true` before starting NSO if using the `ncs` command.
-
- ```bash
- # systemctl daemon-reload
- # systemctl start ncs
- ```
-11. Review and complete the steps in NSO System Install, except running the installer, which you have done already. Once completed, you should have a running NSO instance with data from the Local Install.
-12. Remove the package reload option if it was set.
-
- ```bash
- # unset NCS_RELOAD_PACKAGES
- ```
-13. Update log file paths for Java and Python VM through the NSO CLI.
-
- ```bash
- $ ncs_cli -C -u admin
- admin@ncs# config
- Entering configuration mode terminal
- admin@ncs(config)# unhide debug
- admin@ncs(config)# show full-configuration java-vm stdout-capture file
- java-vm stdout-capture file ./logs/ncs-java-vm.log
- admin@ncs(config)# java-vm stdout-capture file /var/log/ncs/ncs-java-vm.log
- admin@ncs(config)# commit
- Commit complete.
- admin@ncs(config)# show full-configuration java-vm stdout-capture file
- java-vm stdout-capture file /var/log/ncs/ncs-java-vm.log
- admin@ncs(config)# show full-configuration python-vm logging log-file-prefix
- python-vm logging log-file-prefix ./logs/ncs-python-vm
- admin@ncs(config)# python-vm logging log-file-prefix /var/log/ncs/ncs-python-vm
- admin@ncs(config)# commit
- Commit complete.
- admin@ncs(config)# show full-configuration python-vm logging log-file-prefix
- python-vm logging log-file-prefix /var/log/ncs/ncs-python-vm
- admin@ncs(config)# exit
- admin@ncs#
- admin@ncs# exit
- ```
-14. Verify that everything is working correctly.
-
-At this point, you should have a complete copy of the previous Local Install running as a System Install. Should the migration fail at some point and you want to back out of it, the Local Install was not changed and you can easily go back to using it as before.
-
-```bash
-$ sudo systemctl stop ncs
-$ source $HOME/ncs-VERSION/ncsrc
-$ cd $HOME/ncs-run
-$ ncs
-```
-
-In the unlikely event of Local Install becoming corrupted, you can restore it from the backup.
-
-```bash
-$ rm -rf $HOME/ncs-run
-$ tar -xzf $HOME/ncs-backup.tar.gz -C $HOME
-```
diff --git a/administration/installation-and-deployment/post-install-actions/modify-examples-for-system-install.md b/administration/installation-and-deployment/post-install-actions/modify-examples-for-system-install.md
deleted file mode 100644
index 46efe5da..00000000
--- a/administration/installation-and-deployment/post-install-actions/modify-examples-for-system-install.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-description: Alter your examples to work with System Install.
----
-
-# Modify Examples for System Install
-
-{% hint style="warning" %}
-Applies to System Install.
-{% endhint %}
-
-Since all the NSO examples and README steps that come with the installer are primarily aimed at Local Install, you need to modify them to run on a System Install.
-
-Depending on the example, this may require smaller or larger modifications to work with the System Install structure.
-
-For example, to port the [example.ncs/nano-services/basic-vrouter](https://github.com/NSO-developer/nso-examples/tree/6.6/nano-services/basic-vrouter) example to the System Install structure:
-
-1. Make the following changes to the `basic-vrouter/ncs.conf` file:
-
-   ```xml
-   <webui>
-     <enabled>false</enabled>
-     <transport>
-       <tcp>
-         <ip>0.0.0.0</ip>
-         <port>8888</port>
-       </tcp>
-       <ssl>
-   -     <key-file>${NCS_DIR}/etc/ncs/ssl/cert/host.key</key-file>
-   -     <cert-file>${NCS_DIR}/etc/ncs/ssl/cert/host.cert</cert-file>
-   +     <key-file>${NCS_CONFIG_DIR}/etc/ncs/ssl/cert/host.key</key-file>
-   +     <cert-file>${NCS_CONFIG_DIR}/etc/ncs/ssl/cert/host.cert</cert-file>
-       </ssl>
-     </transport>
-   </webui>
-   ```
-2. Copy the Local Install `$NCS_DIR/var/ncs/cdb/aaa_init.xml` file to the `basic-vrouter/` folder.
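-
-For example, a one-line sketch of step 2, assuming the example directory is under the current working directory:
-
-```bash
-cp $NCS_DIR/var/ncs/cdb/aaa_init.xml basic-vrouter/
-```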
-
-Other, more complex examples may require more `ncs.conf` file changes or require a copy of the Local Install default `$NCS_DIR/etc/ncs/ncs.conf` file together with the modification described above, or require the Local Install tool `$NCS_DIR/bin/ncs-setup` to be installed, as the `ncs-setup` command is usually not useful with a System Install. See [Migrate to System Install](migrate-to-system-install.md) for more information.
diff --git a/administration/installation-and-deployment/post-install-actions/running-nso-examples.md b/administration/installation-and-deployment/post-install-actions/running-nso-examples.md
deleted file mode 100644
index ee16338c..00000000
--- a/administration/installation-and-deployment/post-install-actions/running-nso-examples.md
+++ /dev/null
@@ -1,145 +0,0 @@
----
-description: Run and interact with practice examples provided with the NSO installer.
----
-
-# Running NSO Examples
-
-{% hint style="warning" %}
-Applies to Local Install.
-{% endhint %}
-
-This section provides an overview of how to run the examples provided with the NSO installer. By working through the examples, the reader should get a good overview of the various aspects of NSO and hands-on experience from interacting with it.
-
-{% hint style="info" %}
-This section references the examples located in [$NCS\_DIR/examples.ncs](https://github.com/NSO-developer/nso-examples/tree/6.6). The examples all have `README` files that include instructions related to the example.
-{% endhint %}
-
-## General Instructions
-
-1. Make sure that NSO is installed with a Local Install according to the instructions in [Local Install](../local-install.md).
-2. Source the `ncsrc` file in the NSO installation directory to set up a local environment. For example:
-
- ```bash
- $ source ~/nso-6.0/ncsrc
- ```
-3. Proceed to the example directory:
-
- ```bash
- $ cd $NCS_DIR/examples.ncs/device-management/simulated-cisco-ios
- ```
-4. Follow the instructions in the `README` files that are located in the example directories.
-
-Every example directory is a complete NSO run-time directory. The README file and the detailed instructions later in this guide show how to generate a simulated network and NSO configuration for running the specific examples. Basically, the following steps are done:
-
-1. Create a simulated network using the `ncs-netsim --create-network` command:
-
- ```bash
- $ ncs-netsim create-network cisco-ios-cli-3.8 3 ios
- ```
-
- This creates 3 Cisco IOS devices called `ios0`, `ios1`, and `ios2`.
-2. Create an NSO run-time environment using the `ncs-setup` command:
-
- ```bash
- $ ncs-setup --dest .
- ```
-
-   This command uses the `--dest` option to create local directories for logs, database files, and the NSO configuration file in the current directory (note that `.` refers to the current directory).
-3. Start NCS netsim:
-
- ```bash
- $ ncs-netsim start
- ```
-4. Start NSO:
-
- ```bash
- $ ncs
- ```
-
-{% hint style="warning" %}
-It is important to make sure that you stop `ncs` and `ncs-netsim` when moving between examples, using the `--stop` option of `ncs` and the `stop` option of `ncs-netsim`.
-
-```bash
-$ cd $NCS_DIR/examples.ncs/device-management/simulated-cisco-ios
-$ ncs-netsim start
-$ ncs
-$ ncs-netsim stop
-$ ncs --stop
-```
-{% endhint %}
-
-## Common Mistakes
-
-Some of the most common mistakes are:
-
-
-
-Not Sourcing the ncsrc File
-
-You have not sourced the `ncsrc` file:
-
-```bash
-$ ncs
--bash: ncs: command not found
-```
-
-
-
-
-
-Not Starting NSO from the Runtime Directory
-
-You are trying to start NSO from a directory that is not set up as a runtime directory.
-
-```bash
-$ ncs
-Bad configuration: /etc/ncs/ncs.conf:0: "./state/packages-in-use: \
- Failed to create symlink: no such file or directory"
-Daemon died status=21
-```
-
-What happened above is that NSO did not find an `ncs.conf` in the local directory, so it used the default in `/etc/ncs/ncs.conf`. That `ncs.conf` expects directories such as `./state` to exist under the current directory, which is not the case here. Make sure that you `cd` to the root of the example and check that it contains an `ncs.conf` file and a CDB directory.
-
-
-
-
-
-Having Another Instance of NSO Running
-
-You already have another instance of NSO running (or the same with netsim):
-
-```bash
-$ ncs
-Cannot bind to internal socket 127.0.0.1:4569 : address already in use
-Daemon died status=20
-$ ncs-netsim start
-DEVICE c0 Cannot bind to internal socket 127.0.0.1:5010 : \
- address already in use
-Daemon died status=20
-FAIL
-```
-
-To resolve the above, just stop the running instance of NSO and netsim. Remember that it does not matter where you started the "running" NSO and netsim; there is no need to `cd` back to the other example before stopping.
-
-
-
-
-
-Not Having the NetSim Device Configuration Loaded into NSO
-
-Another problem that users sometimes run into is that the NetSim device configuration is not loaded into NSO. This can happen if the order of commands is not followed. To resolve this, remove the database files in the `ncs-cdb` directory (keep any files with the `.xml` extension). This way, NSO will reload the XML initialization files provided by **ncs-setup**.
-
-```bash
-$ ncs --stop
-$ cd ncs-cdb/
-$ ls
-A.cdb
-C.cdb
-O.cdb
-S.cdb
-netsim_devices_init.xml
-$ rm *.cdb
-$ ncs
-```
-
-
diff --git a/administration/installation-and-deployment/post-install-actions/start-stop-nso.md b/administration/installation-and-deployment/post-install-actions/start-stop-nso.md
deleted file mode 100644
index 93030e72..00000000
--- a/administration/installation-and-deployment/post-install-actions/start-stop-nso.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-description: Start and stop the NSO daemon.
----
-
-# Start and Stop NSO
-
-{% hint style="warning" %}
-Applies to Local Install.
-{% endhint %}
-
-The command `ncs -h` shows various options when starting NSO. By default, NSO starts in the background without an associated terminal. It is recommended to add NSO to the `/etc/init` scripts of the deployment hosts. For more information, see the [ncs(1)](../../../resources/man/ncs.1.md) in Manual Pages.
-
-Whenever you start (or reload) the NSO daemon, it reads its configuration from `./ncs.conf` or `${NCS_DIR}/etc/ncs/ncs.conf` or from the file specified with the `-c` option. Parts of the configuration can also be placed in the `ncs.conf.d` directory that must be placed next to the actual `ncs.conf` file.
-
-```bash
-$ ncs
-$ ncs --stop
-$ ncs -h
-...
-```
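-
-For example, to point the daemon at an explicit configuration file with the `-c` option mentioned above:
-
-```bash
-$ ncs -c /etc/ncs/ncs.conf
-```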
diff --git a/administration/installation-and-deployment/post-install-actions/uninstall-local-install.md b/administration/installation-and-deployment/post-install-actions/uninstall-local-install.md
deleted file mode 100644
index d0287273..00000000
--- a/administration/installation-and-deployment/post-install-actions/uninstall-local-install.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Remove Local Install.
----
-
-# Uninstall Local Install
-
-{% hint style="warning" %}
-Applies to Local Install.
-{% endhint %}
-
-To uninstall Local Install, simply delete the Install Directory.
diff --git a/administration/installation-and-deployment/post-install-actions/uninstall-system-install.md b/administration/installation-and-deployment/post-install-actions/uninstall-system-install.md
deleted file mode 100644
index f5a319f4..00000000
--- a/administration/installation-and-deployment/post-install-actions/uninstall-system-install.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-description: Remove System Install.
----
-
-# Uninstall System Install
-
-{% hint style="warning" %}
-Applies to System Install.
-{% endhint %}
-
-NSO can be uninstalled using the `ncs-uninstall` command (see [ncs-installer(1)](../../../resources/man/ncs-installer.1.md)) only if NSO was installed with the `--system-install` option. Either part of the static files or the full installation can be removed with `ncs-uninstall`. Ensure that you stop NSO before uninstalling.
-
-```bash
-# ncs-uninstall --all
-```
-
-Executing the above command removes the Installation Directory `/opt/ncs` (including symbolic links), the Configuration Directory `/etc/ncs`, the Run Directory `/var/opt/ncs`, the Log Directory `/var/log/ncs`, the `systemd` service file `/etc/systemd/system/ncs.service`, the `systemd` environment file `/etc/ncs/ncs.systemd.conf`, and the user profile scripts from `/etc/profile.d`.
-
-To make sure that no license entitlements are consumed after you have uninstalled NSO, be sure to perform the `deregister` command in the CLI:
-
-```cli
-admin@ncs# license smart deregister
-```
diff --git a/administration/installation-and-deployment/system-install.md b/administration/installation-and-deployment/system-install.md
deleted file mode 100644
index f0d5a5ea..00000000
--- a/administration/installation-and-deployment/system-install.md
+++ /dev/null
@@ -1,816 +0,0 @@
----
-description: Install NSO for production use in a system-wide deployment.
----
-
-# System Install
-
-## Installation Steps
-
-Complete the following activities in the given order to perform a System Install of NSO.
-
-
-
-{% hint style="info" %}
-**Mode of Install**
-
-NSO System Install can be installed in **standard mode** or in [**FIPS**](https://www.nist.gov/itl/publications-0/federal-information-processing-standards-fips)**-compliant mode**. Standard mode install supports a broader set of cryptographic algorithms, while the FIPS mode install restricts NSO to use only FIPS 140-3-validated cryptographic modules and algorithms for enhanced/regulated security and compliance. Use FIPS mode only in environments that require compliance with specific security standards, especially in U.S. federal agencies or regulated industries. For all other use cases, install NSO in standard mode.
-
-\* FIPS: Federal Information Processing Standards
-{% endhint %}
-
-### Step 1 - Fulfill System Requirements
-
-Start by setting up your system to install and run NSO.
-
-To install NSO:
-
-1. Fulfill at least the primary requirements.
-2. If you intend to build and run NSO deployment examples, you also need to install additional applications listed under Additional Requirements.
-
-{% hint style="warning" %}
-Where requirements list a specific or higher version, there always exists a (small) possibility that a higher version introduces breaking changes. If in doubt whether the higher version is fully backwards compatible, always use the specific version.
-{% endhint %}
-
-
-
-Primary Requirements
-
-Primary requirements to do a System Install include:
-
-* A system running Linux or macOS on either the `x86_64` or `ARM64` architecture for development; Linux for production. For [FIPS](https://www.nist.gov/itl/publications-0/federal-information-processing-standards-fips) mode, OS FIPS compliance may be required depending on your specific requirements.
-* GNU libc 2.24 or higher.
-* Java JRE 17 or higher. Used by Cisco Smart Licensing.
-* Required and included with many Linux/macOS distributions:
- * `tar` command. Unpack the installer.
- * `gzip` command. Unpack the installer.
- * `ssh-keygen` command. Generate SSH host key.
- * `openssl` command. Generate self-signed certificates for HTTPS.
- * `find` command. Used to find out if all required libraries are available.
- * `which` command. Used by the NSO package manager.
- * `libpam.so.0`. Pluggable Authentication Module library.
-  * `libexpat.so.1`. eXtensible Markup Language (XML) parsing library.
- * `libz.so.1` version 1.2.7.1 or higher. Data compression library.
-
-
-
-
-
-Additional Requirements
-
-Additional requirements to, for example, build and run NSO production deployment examples include:
-
-* Java JDK 17 or higher.
-* Ant 1.9.8 or higher.
-* Python 3.10 or higher.
-* Python Setuptools is required to build the Python API.
-* Often installed using the Python package installer pip:
- * Python Paramiko 2.2 or higher. To use netconf-console.
- * Python requests. Used by the RESTCONF demo scripts.
-* `xsltproc` command. Used by the `support/ned-make-package-meta-data` command to generate the `package-meta-data.xml` file.
-* One of the following web browsers is required for NSO GUI capabilities. The version must be supported by the vendor at the time of release.
- * Safari
- * Mozilla Firefox
- * Microsoft Edge
- * Google Chrome
-* OpenSSH client applications. For example, `ssh` and `scp` commands.
-* cron. Run time-based tasks, such as `logrotate`.
-* `logrotate`. Rotate, compress, and mail NSO and system logs.
-* `rsyslog`. Pass NSO logs to a local syslog managed by `rsyslogd`, and pass logs to a remote node.
-* `systemd` or `init.d` scripts to start and stop NSO.
-
-
-
-
-
-FIPS Mode Entropy Requirements
-
-The following applies if you are running a container-based setup of your FIPS install:
-
-In containerized environments (e.g., Docker) that run on older Linux kernels (e.g., Ubuntu 18.04), `/dev/random` may block if the system’s entropy pool is low. This can lead to delays or hangs in FIPS mode, as cryptographic operations require high-quality randomness.
-
-To avoid this:
-
-* Prefer newer kernels (e.g., Ubuntu 22.04 or later), where entropy handling is improved to mitigate the issue.
-* Or, install an entropy daemon like Haveged on the Docker host to help maintain sufficient entropy.
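-
-If you opt for an entropy daemon, a minimal sketch for a Debian/Ubuntu Docker host (package and service names assumed to be available in your distribution):
-
-```bash
-sudo apt-get install haveged
-sudo systemctl enable --now haveged
-```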
-
-Check available entropy on the host system with:
-
-```bash
-cat /proc/sys/kernel/random/entropy_avail
-```
-
-A value of 256 or higher is generally considered safe. Reference: [Oracle blog post](https://blogs.oracle.com/linux/post/entropyavail-256-is-good-enough-for-everyone).
-
-
-
-### Step 2 - Download the Installer and NEDs
-
-To download the Cisco NSO installer and example NEDs:
-
-1. Go to the Cisco's official [Software Download](https://software.cisco.com/download/home) site.
-2. Search for the product "Network Services Orchestrator" and select the desired version.
-3. There are two versions of the NSO installer: one for macOS and one for Linux systems. For System Install, choose the Linux version.
-
-
-
-Identifying the Installer
-
-You need to know your system specifications (Operating System and CPU architecture) to choose the appropriate NSO installer.
-
-NSO is delivered as an OS/CPU-specific signed self-extractable archive. The signed archive file has the pattern `nso-VERSION.OS.ARCH.signed.bin` that after signature verification extracts the `nso-VERSION.OS.ARCH.installer.bin` archive file, where:
-
-* `VERSION` is the NSO version to install.
-* `OS` is the Operating System (`linux` for all Linux distributions and `darwin` for macOS).
-* `ARCH` is the CPU architecture, for example, `x86_64`.
-
-
-
-### Step 3 - Unpack the Installer
-
-If your downloaded file is a `signed.bin` file, it means that it has been digitally signed by Cisco, and upon execution, you will verify the signature and unpack the `installer.bin`.
-
-If you only have `installer.bin`, skip to the next step.
-
-To unpack the installer:
-
-1. In the terminal, list the binaries in the directory where you downloaded the installer, for example:
-
- ```bash
- cd ~/Downloads
- ls -l nso*.bin
- -rw-r--r--@ 1 user staff 199M Dec 15 11:45 nso-6.0.linux.x86_64.installer.bin
- -rw-r--r--@ 1 user staff 199M Dec 15 11:45 nso-6.0.linux.x86_64.signed.bin
- ```
-2. Use the `sh` command to run the `signed.bin` to verify the certificate and extract the installer binary and other files. An example output is shown below.
-
- ```bash
- sh nso-6.0.linux.x86_64.signed.bin
- # Output
- Unpacking...
- Verifying signature...
- Downloading CA certificate from http://www.cisco.com/security/pki/certs/crcam2.cer ...
- Successfully downloaded and verified crcam2.cer.
- Downloading SubCA certificate from http://www.cisco.com/security/pki/certs/innerspace.cer ...
- Successfully downloaded and verified innerspace.cer.
- Successfully verified root, subca and end-entity certificate chain.
- Successfully fetched a public key from tailf.cer.
- Successfully verified the signature of nso-6.0.linux.x86_64.installer.bin using tailf.cer
- ```
-3. List the files to check if extraction was successful.
-
- ```bash
- ls -l
- # Output
- -rw-r--r-- 1 user staff 1.8K Nov 29 06:05 README.signature
- -rw-r--r-- 1 user staff 12K Nov 29 06:05 cisco_x509_verify_release.py
- -rwxr-xr-x 1 user staff 199M Nov 29 05:55 nso-6.0.linux.x86_64.installer.bin
- -rw-r--r-- 1 user staff 256B Nov 29 06:05 nso-6.0.linux.x86_64.installer.bin.signature
- -rwxr-xr-x@ 1 user staff 199M Dec 15 11:45 nso-6.0.linux.x86_64.signed.bin
- -rw-r--r-- 1 user staff 1.4K Nov 29 06:05 tailf.cer
- ```
-
-{% hint style="info" %}
-There may also be additional files present.
-{% endhint %}
-
-
-
-Description of Unpacked Files
-
-The following contents are unpacked:
-
-* `nso-VERSION.OS.ARCH.installer.bin`: The NSO installer.
-* `nso-VERSION.OS.ARCH.installer.bin.signature`: Signature generated for the NSO image.
-* `tailf.cer`: An enclosed Cisco-signed x.509 end-entity certificate containing the public key that is used to verify the signature.
-* `README.signature`: File with further details on the unpacked content and steps on how to run the signature verification program. To manually verify the signature, refer to the steps in this file.
-* `cisco_x509_verify_release.py`: Python program that can be used to verify the 3-tier x.509 certificate chain and signature.
-* Multiple `.tar.gz` files: Bundled packages, extending the base NSO functionality.
-* Multiple `.tar.gz.signature` files: Digital signatures for the bundled packages.
-
-Since NSO version 6.3, a few additional NSO packages are included. They contain the following platform tools:
-
-* HCC
-* Observability Exporter
-* Phased Provisioning
-* Resource Manager
-
-For platform tools documentation, refer to the individual package's `README` file or to the [online documentation](https://nso-docs.cisco.com/resources).
-
-**NED Packages**
-
-The NED packages that are available with the NSO installation are netsim-based example NEDs. These NEDs are used for NSO examples only.
-
-Fetch the latest production-grade NEDs from [Cisco Software Download](https://software.cisco.com/download/home) using the URLs provided on your NED license certificates.
-
-**Manual Pages**
-
-The installation program will unpack the NSO manual pages from the documentation archive, allowing you to use the `man` command to view them. The Manual Pages are also available in PDF format and from the online documentation located on [NCS man-pages, Volume 1](../../resources/man/ncs-installer.1.md) in Manual Pages.
-
-Following is a list of a few of the installed manual pages:
-
-* `ncs(1)`: Command to start and control the NSO daemon.
-* `ncsc(1)`: NSO Yang compiler.
-* `ncs_cli(1)`: Frontend to the NSO CLI engine.
-* `ncs-netsim(1)`: Command to create and manipulate a simulated network.
-* `ncs-setup(1)`: Command to create an initial NSO setup.
-* `ncs.conf`: NSO daemon configuration file format.
-
-For example, to view the manual page describing the NSO configuration file, you should type:
-
-```bash
-$ man ncs.conf
-```
-
-Apart from the manual pages, extensive information about command line options can be obtained by running `ncs` and `ncsc` with the `--help` (abbreviated `-h`) flag.
-
-```bash
-$ ncs --help
-```
-
-```bash
-$ ncsc --help
-```
-
-**Installer Help**
-
-Run the `sh nso-VERSION.linux.x86_64.installer.bin --help` command to view additional help on running binaries. More details can be found in the [ncs-installer(1)](../../resources/man/ncs-installer.1.md) Manual Page included with NSO.
-
-Notice the two options for `--local-install` or `--system-install`.
-
-```bash
-sh nso-6.0.linux.x86_64.installer.bin --help
-```
-
-
-
-### Step 4 - Run the Installer
-
-To run the installer:
-
-1. Navigate to your Install Directory.
-2. Run the installer with the `--system-install` option to perform System Install. This option creates an install of NSO that is suitable for production deployment. At this point, you can choose to install NSO in standard mode or in FIPS mode.
-
-{% tabs %}
-{% tab title="Standard System Install" %}
-The standard mode is the regular NSO install and is suitable for most installations. FIPS is disabled in this mode.
-
-For standard NSO install, run the installer as below.
-
-```bash
-$ sudo sh nso-VERSION.OS.ARCH.installer.bin --system-install
-```
-
-{% code title="Example: Standard System Install" %}
-```bash
-$ sudo sh nso-6.0.linux.x86_64.installer.bin --system-install
-```
-{% endcode %}
-{% endtab %}
-
-{% tab title="FIPS System Install" %}
-FIPS mode creates a FIPS-compliant NSO install.
-
-FIPS mode should only be used for deployments that are subject to strict compliance regulations as the cryptographic functions are then confined to the CiscoSSL FIPS 140-3 module library.
-
-For FIPS-compliant NSO install, run the command with the additional `--fips-install` flag. Afterwards, verify FIPS in `ncs.conf`.
-
-```bash
-$ sudo sh nso-VERSION.OS.ARCH.installer.bin --system-install --fips-install
-```
-
-{% code title="Example: FIPS System Install" %}
-```bash
-$ sudo sh nso-6.5.linux.x86_64.installer.bin --system-install --fips-install
-```
-{% endcode %}
-
-{% hint style="info" %}
-**NSO Configuration for FIPS**
-
-Note the following as part of FIPS-specific configuration/install:
-
-1. The `ncs.conf` file is automatically configured to enable FIPS by setting the following flag:
-
-```xml
-<fips-mode>
-  <enabled>true</enabled>
-</fips-mode>
-```
-
-2. Additional environment variables (`NCS_OPENSSL_CONF_INCLUDE`, `NCS_OPENSSL_CONF`, `NCS_OPENSSL_MODULES`) are configured in `ncsrc` for FIPS compliance.
-3. The default `crypto.so` is overwritten at install for FIPS compliance.
-
-Additionally, note that:
-
-* As certain algorithms typically available with CiscoSSL are not included in the FIPS 140-3 validated module (and therefore disabled in FIPS mode), you need to configure NSO to use only the algorithms and cryptographic suites available through the CiscoSSL FIPS 140-3 object module.
-* With FIPS, NSO signals the NEDs to operate in FIPS mode using Bouncy Castle FIPS libraries for Java-based components, ensuring compliance with FIPS 140-3. To support this, NED packages may also require upgrading, as older versions — particularly SSH-based NEDs — often lack the necessary FIPS signaling or Bouncy Castle support required for cryptographic compliance.
-* Configure SSH keys in `ncs.conf` and `init.xml`.
-{% endhint %}
-{% endtab %}
-{% endtabs %}
-
-
-
-Default Directories and Scripts
-
-The System Install by default creates the following directories:
-
-* The Installation Directory is created in `/opt/ncs`, where the distribution is available.
-* The Configuration Directory is created in `/etc/ncs`, where the `ncs.conf` file, SSH keys, and WebUI certificates are created.
-* The Running Directory is created in `/var/opt/ncs`, where runtime state files, CDB database, and packages are created.
-* The Log Directory is created in `/var/log/ncs`, where the log files are populated.
-* System-wide environment variables are created in `/etc/profile.d/ncs.sh`.
-* The installer creates a `systemd` system service script in `/etc/systemd/system/ncs.service` and enables the NSO service to start at boot, but the service is _not_ started immediately. See the steps below for starting NSO after installation and before rebooting.
-* To allow package reload when starting NSO, an environment file called `/etc/ncs/ncs.systemd.conf` is created. This file is owned by the user that starts NSO.
-
-For the `--system-install` option, you can also choose a user-defined (non-default) Installation Directory, Configuration Directory, Running Directory, and Log Directory with `--install-dir`, `--config-dir`, `--run-dir` and `--log-dir` parameters, and specify that NSO should run as a different user than root with the `--run-as-user` parameter.
-
-If you choose a non-default Installation Directory by using `--install-dir`, you need to specify `--install-dir` for subsequent installs and also for backup and restore.
-
-Use the `--ignore-init-scripts` option to disable provisioning the `systemd` system service.
-
-If a legacy SysV service exists in `/etc/init.d/ncs` when installing in interactive mode, the user will be prompted to continue using the old SysV service behavior or prepare a `systemd` service. In non-interactive mode, a `systemd` service will be prepared where a `/etc/systemd/system/ncs.service.prepare` file is created. The service is not enabled to start at boot. To enable it, rename it to `/etc/systemd/system/ncs.service` and remove the old `/etc/init.d/ncs` SysV service. When using the `--non-interactive` option, the `/etc/systemd/system/ncs.service` file will be overwritten if it already exists.
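-
-A sketch of activating the prepared service, using the paths from the paragraph above (the `systemctl` invocations are assumptions for a typical `systemd` host):
-
-```bash
-sudo mv /etc/systemd/system/ncs.service.prepare /etc/systemd/system/ncs.service
-sudo rm /etc/init.d/ncs
-sudo systemctl daemon-reload
-sudo systemctl enable ncs
-```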
-
-For more information on the `ncs-installer`, see the [ncs-installer(1)](../../resources/man/ncs-installer.1.md) man page.
-
-For an extensive guide to NSO deployment, refer to [Development to Production Deployment](development-to-production-deployment/).
-
-
-
-
-
-Enable Strict Overcommit Accounting on the Host
-
-By default, the Linux kernel allows overcommit of memory. However, memory overcommit produces an unexpected and unreliable environment for NSO because the Linux Out-Of-Memory (OOM) killer may terminate NSO without restarting it if the system is critically low on memory. Also, when the OOM killer terminates NSO, no system dump file will be produced, and the debug information will be lost. Thus, it is strongly recommended to enable strict overcommit accounting.
-
-#### **Heuristic Overcommit Mode as an Alternative to Strict Overcommit**
-
-The alternative, heuristic overcommit mode (see below for best-effort recommendations), can be useful if the NSO host has severe memory limitations. For example, RAM sizing for the NSO host may not have taken into account that the schema (from YANG models) is loaded into memory by NSO Python and Java packages, which affects total committed memory (Committed\_AS), even after the recommendations in [CDB Stores the YANG Model Schema](../../development/advanced-development/scaling-and-performance-optimization.md#d5e8743) have been considered.
-
-#### Recommended: Host Configured for Strict Overcommit
-
-* Set `vm.overcommit_memory=2` to enable strict overcommit accounting.
-* Set `vm.overcommit_ratio` so the CommitLimit is approximately equal to physical RAM, with 5% headroom for the kernel to reduce the risk of system-wide OOM conditions: for example, 95% of RAM when no swap is present (recommended), or the swap-neutralizing ratio minus 5 percentage points. Increase the headroom if the host runs additional services.
-* Alternatively, set `vm.overcommit_kbytes`, which takes precedence: while `vm.overcommit_kbytes > 0`, the CommitLimit is a fixed value in kB, `vm.overcommit_ratio` is ignored, and swap and HugeTLB are not part of the calculation.
-* Strongly discourage swap use at runtime by setting `vm.swappiness=1`.
-* If swap must remain enabled system-wide, prevent NSO from using swap by configuring its cgroup with `memory.swap.max=0` (cgroup v2); see the sketch after this list.
-* If swap must be enabled for NSO, use a fast disk, for example, an NVMe SSD.
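-
-A sketch of the cgroup approach from the list above, assuming NSO runs under the `ncs` `systemd` service from this install; `MemorySwapMax` is the `systemd` directive that maps to the cgroup v2 `memory.swap.max` control:
-
-```bash
-sudo mkdir -p /etc/systemd/system/ncs.service.d
-printf '[Service]\nMemorySwapMax=0\n' | sudo tee /etc/systemd/system/ncs.service.d/no-swap.conf
-sudo systemctl daemon-reload
-```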
-
-**Apply Immediately**
-
-{% code title="To apply strict overcommit accounting with immediate effect" %}
-```bash
-echo 2 > /proc/sys/vm/overcommit_memory
-```
-{% endcode %}
-
-When `vm.overcommit_memory=2`, the overcommit\_ratio parameter defines the percentage of physical RAM that is available for commit.
-
-The Linux kernel computes the CommitLimit:
-
-CommitLimit = MemTotal × (overcommit\_ratio / 100) + SwapTotal − total\_huge\_TLB
-
-* MemTotal is the total amount of RAM on the system.
-* overcommit\_ratio is the value in `/proc/sys/vm/overcommit_ratio`.
-* SwapTotal is the amount of swap space. Can be 0.
-* total\_huge\_TLB is the amount of memory set aside for huge pages. Can be 0.
-
-The default overcommit\_ratio is 50%. On systems with more than 50% of RAM available, this default can underutilize physical memory.
-
-Do not set `vm.overcommit_ratio=100` as it includes all RAM plus all swap in the CommitLimit and leaves no headroom for the kernel. While swap increases the commit capacity, it is usually slow and should be avoided for NSO.
-
-**Compute overcommit\_ratio to Neutralize Swap**
-
-To allocate physical RAM only in commit accounting and keep a 5-10% headroom for the kernel:
-
-* Compute the base ratio: base\_ratio = 100 × (MemTotal − SwapTotal) / MemTotal.
-* Apply headroom: overcommit\_ratio = floor(base\_ratio) − 5.
-
-Notes:
-
-* overcommit\_ratio is an integer; round down for a bit of extra headroom.
-* Recompute the ratio if RAM or swap changes.
-* If SwapTotal ≥ MemTotal, swap cannot be neutralized via overcommit\_ratio; use overcommit\_kbytes instead (see Example 3).
-* If the computed value is very low, ensure it still fits your workload requirements.
-
-**Example 1: No Swap, 5% Headroom**
-
-{% code title="Check memory totals" %}
-```bash
-cat /proc/meminfo | grep "MemTotal\|SwapTotal"
-MemTotal: 8039352 kB
-SwapTotal: 0 kB
-```
-{% endcode %}
-
-{% code title="Apply settings with immediate effect" %}
-```bash
-echo 2 > /proc/sys/vm/overcommit_memory
-echo 95 > /proc/sys/vm/overcommit_ratio
-echo 1 > /proc/sys/vm/swappiness
-```
-{% endcode %}
-
-Rationale: With no swap, set overcommit\_ratio=95 to allow \~95% of RAM for user-space commit, leaving \~5% headroom for the kernel.
-
-**Example 2: MemTotal > SwapTotal, Neutralize Swap with 5% Headroom**
-
-{% code title="Check memory totals" %}
-```bash
-cat /proc/meminfo | grep "MemTotal\|SwapTotal"
-MemTotal: 8039352 kB
-SwapTotal: 1048572 kB
-```
-{% endcode %}
-
-Calculate the ratio:
-
-* base\_ratio = 100 × ((8039352 − 1048572) / 8039352) ≈ 86.9%.
-* Apply 5% headroom: overcommit\_ratio = floor(86.9) − 5 = 81.
-
-{% code title="Apply" %}
-```bash
-echo 2 > /proc/sys/vm/overcommit_memory
-echo 81 > /proc/sys/vm/overcommit_ratio
-echo 1 > /proc/sys/vm/swappiness
-```
-{% endcode %}
-
-This neutralizes swap's contribution to the CommitLimit and then applies 5% headroom, keeping the CommitLimit safely below physical RAM to leave room for the kernel.
-
-**Example 3: SwapTotal ≥ MemTotal (Headroom via ratio not applicable, use overcommit\_kbytes)**
-
-{% code title="Check memory totals" %}
-```bash
-cat /proc/meminfo | grep "MemTotal\|SwapTotal"
-MemTotal: 16000000 kB
-SwapTotal: 16000000 kB
-```
-{% endcode %}
-
-Compute:
-
-* CommitLimit\_kB = floor(MemTotal × 0.95) = floor(16,000,000 × 0.95) = 15,200,000 kB.
-
-{% code title="Apply" %}
-```bash
-echo 2 > /proc/sys/vm/overcommit_memory
-echo 15200000 > /proc/sys/vm/overcommit_kbytes
-echo 1 > /proc/sys/vm/swappiness
-```
-{% endcode %}
-
-Note that overcommit\_kbytes sets a fixed CommitLimit that ignores swap; recompute if RAM changes. Also note the HugeTLB subtraction does not apply when using overcommit\_kbytes (fixed commit budget).
-
-Refer to the Linux [proc\_sys\_vm(5)](https://man7.org/linux/man-pages/man5/proc_sys_vm.5.html) manual page for more details on the overcommit\_memory, overcommit\_ratio, and overcommit\_kbytes parameters.
-
-**Persist Across Reboots**
-
-To ensure strict overcommit accounting persists across reboots, add the relevant lines below to `/etc/sysctl.conf` (or a file under `/etc/sysctl.d/`), filling in either `vm.overcommit_ratio` or `vm.overcommit_kbytes`.
-
-{% code title="Add to /etc/sysctl.conf" %}
-```
-vm.overcommit_memory = 2
-vm.overcommit_ratio = # if not using overcommit_kbytes
-vm.overcommit_kbytes = # if using a fixed CommitLimit
-vm.swappiness = 1
-```
-{% endcode %}
-
-See the Linux [sysctl.conf(5)](https://man7.org/linux/man-pages/man5/sysctl.conf.5.html) manual page for details.
-
-**NSO Crash Dumps**
-
-If NSO aborts due to failure to allocate memory, NSO will produce a system dump by default before aborting. When starting NSO from a non-root user, set the `NCS_DUMP` environment variable to point to a filename in a directory that the non-root user can access. The default setting is `NCS_DUMP=ncs_crash.dump`, where the file is written to the NSO run-time directory, typically `NCS_RUN_DIR=/var/opt/ncs`. If the user running NSO cannot write to the directory that the `NCS_DUMP` environment variable points to, generating the system dump file will fail, and the debug information will be lost.
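-
-For example, a minimal sketch that points the dump at the run-time directory mentioned above before starting NSO as a non-root user:
-
-```bash
-export NCS_DUMP=/var/opt/ncs/ncs_crash.dump
-```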
-
-#### **Alternative: Heuristic Overcommit Mode (vm.overcommit\_memory=0) With Committed\_AS Monitoring**
-
-As an alternative to the recommended strict mode, `vm.overcommit_memory=2`, you can keep `vm.overcommit_memory=0` to allow overcommit of memory and monitor the total committed memory (Committed\_AS) versus CommitLimit using, for example, a best-effort script or observability tool. When Committed\_AS crosses a threshold, for example, 90% of CommitLimit, proactively trigger a series of NSO debug dumps every few seconds via `ncs --debug-dump`. Optionally, at a second, critical threshold, for example, 95% of CommitLimit, trigger NSO to produce a system dump and then exit gracefully.
-
-* This approach does not prevent NSO from getting killed; it attempts to capture diagnostic data before memory pressure becomes critical and the Linux OOM-killer kills NSO.
-* If swap is enabled, prefer vm.swappiness=1 and consider placing NSO in a cgroup with memory.swap.max=0 to avoid swap I/O for NSO. This requires Linux cgroup v2 and a service manager (e.g., systemd) that manages the cgroup.
-
-- Committed\_AS versus CommitLimit is a more meaningful early‑warning signal than Committed\_AS versus MemTotal, because CommitLimit reflects the kernel’s current overcommit policy, swap availability, and huge page reservations—MemTotal does not.
-- When in Heuristic mode (vm.overcommit\_memory=0): CommitLimit is informative, not enforced. It’s still better than MemTotal for early warning, but OOM can occur before or after you reach it.
-- If necessary for your use-case, complement with MemAvailable, swap activity (vmstat or /proc/vmstat), PSI memory pressure (/proc/pressure/memory), and per‑process/cgroup RSS to catch imminent pressure that Committed\_AS alone may miss.
-- Ensure the user running the monitor has permission to execute `ncs --debug-dump` and write to the chosen dump directory.
-- See "NSO Crash Dumps" above for crash dump details.
-
-{% code title="Simple example script NSO debug-dump monitor" overflow="wrap" %}
-```bash
-#!/usr/bin/env bash
-# Simple NSO debug-dump monitor for heuristic overcommit mode (vm.overcommit_memory=0).
-# Triggers ncs --debug-dump when Committed_AS reaches 90% of CommitLimit.
-# Triggers NSO to produce a system dump before exiting using kill -USR1 when Committed_AS reaches 95% of CommitLimit
-
-THRESHOLD_PCT=90 # Trigger at 90% of CommitLimit (10% headroom).
-CRITICAL_PCT=95 # Trigger at 95% of CommitLimit (5% headroom).
-POLL_INTERVAL=5 # Seconds between checks.
-PROCESS_CHECK_INTERVAL=30
-DUMP_COUNT=10 # Number of dumps to collect.
-DUMP_DELAY=10 # Seconds between dumps.
-DUMP_PREFIX="dump" # Files like dump.1.bin, dump.2.bin, ...
-
-command -v ncs >/dev/null 2>&1 || { echo "ncs command not found in PATH."; exit 1; }
-
-find_nso_pid() {
- pgrep -x ncs.smp | head -n1 || true
-}
-
-while true; do
- pid="$(find_nso_pid)"
- if [ -z "${pid:-}" ]; then
- echo "NSO not running; retry in ${PROCESS_CHECK_INTERVAL}s..."
- sleep "$PROCESS_CHECK_INTERVAL"
- continue
- fi
-
- committed="$(awk '/Committed_AS:/ {print $2}' /proc/meminfo)"
- commit_limit="$(awk '/CommitLimit:/ {print $2}' /proc/meminfo)"
- if [ -z "$committed" ] || [ -z "$commit_limit" ]; then
- echo "Unable to read /proc/meminfo; retry in ${POLL_INTERVAL}s..."
- sleep "$POLL_INTERVAL"
- continue
- fi
-
- threshold=$(( commit_limit * THRESHOLD_PCT / 100 ))
- critical=$(( commit_limit * CRITICAL_PCT / 100 ))
- echo "PID=${pid} Committed_AS=${committed}kB; CommitLimit=${commit_limit}kB; Threshold=${threshold}kB; Critical=${critical}kB."
- if [ "$committed" -ge "$critical" ]; then
- echo "Critical threshold crossed; collect a system dump and stop NSO..."
- kill -USR1 ${pid}
- exit 0
- elif [ "$committed" -ge "$threshold" ]; then
- echo "Threshold crossed; collecting ${DUMP_COUNT} debug dumps..."
- for i in $(seq 1 "$DUMP_COUNT"); do
- file="${DUMP_PREFIX}.${i}.bin"
- echo "Dump $i -> ${file}"
- if ! ncs --debug-dump "$file"; then
- echo "Debug dump $i failed."
- fi
- sleep "$DUMP_DELAY"
- done
- echo "All debug dumps completed; exiting."
- exit 0
- fi
-
- sleep "$POLL_INTERVAL"
-done
-```
-{% endcode %}
-
-
-
-{% hint style="info" %}
-Some older NSO releases expect the `/etc/init.d/` folder to exist in the host operating system. If the folder does not exist, the installer may fail to successfully install NSO. A workaround that allows the installer to proceed is to create the folder manually, but the NSO process will not automatically start at boot.
-{% endhint %}
-
-### Step 5 - Set Up User Access
-
-The installation is configured for PAM authentication, with group assignment based on the OS group database (e.g., the `/etc/group` file). Users that need access to NSO must belong to either the `ncsadmin` group (for unlimited access rights) or the `ncsoper` group (for minimal access rights).
-
-To set up user access:
-
-1. To create the `ncsadmin` group, use the OS shell command:
-
- ```bash
- # groupadd ncsadmin
- ```
-2. To create the `ncsoper` group, use the OS shell command:
-
- ```bash
- # groupadd ncsoper
- ```
-3. To add an existing user to one of these groups, use the OS shell command:
-
- ```bash
- # usermod -a -G 'groupname' 'username'
- ```
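-
-For example, to give a hypothetical existing user `alice` unlimited access rights:
-
-```bash
-# usermod -a -G ncsadmin alice
-```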
-
-### Step 6 - Set Environment Variables
-
-To set environment variables:
-
-1. Change to Super User privileges.
-
- ```bash
- $ sudo -s
- ```
-2. The installation program creates a shell script file in each NSO installation which sets the environment variables needed to run NSO. With the `--system-install` option, these variables are by default set for login shells via `/etc/profile.d`. To explicitly set the variables, source `ncs.sh` or `ncs.csh` depending on your shell type.
-
- ```bash
- # source /etc/profile.d/ncs.sh
- ```
-3. Start NSO.
-
- ```bash
- # systemctl daemon-reload
- # systemctl start ncs
- ```
-
- NSO starts at boot going forward.
-
- Once you log on with the user that belongs to `ncsadmin` or `ncsoper`, you can directly access the CLI as shown below:
-
- ```bash
- $ ncs_cli -C
- ```
-
-### Step 7 - Runtime Directory Creation
-
-As part of the System Install, the NSO daemon `ncs` is automatically started at boot time. You do not need to create a Runtime Directory for System Install.
-
-### Step 8 - Generate License Registration Token
-
-To conclude the NSO installation, a license registration token must be created using a Cisco Smart Software Manager (CSSM) account. This is because NSO uses [Cisco Smart Licensing](../management/system-management/cisco-smart-licensing.md) to make it easy to deploy and manage NSO license entitlements. Login credentials to the [CSSM](https://www.cisco.com/c/en/us/buy/smart-accounts/software-manager.html) account are provided by your Cisco contact, and detailed instructions on how to [create a registration token](../management/system-management/cisco-smart-licensing.md#d5e2927) can be found in Cisco Smart Licensing. General licensing information covering licensing models, how licensing works, usage compliance, etc., is covered in the [Cisco Software Licensing Guide](https://www.cisco.com/c/en/us/buy/licensing/licensing-guide.html).
-
-To generate a license registration token:
-
-1. When you have a token, start a Cisco CLI towards NSO and enter the token, for example:
-
- ```bash
- $ ncs_cli -Cu admin
- admin@ncs# license smart register idtoken
- YzIzMDM3MTgtZTRkNC00YjkxLTk2ODQtOGEzMTM3OTg5MG
-
- Registration process in progress.
- Use the 'show license status' command to check the progress and result.
- ```
-
- \
-   Upon successful registration, NSO automatically requests a license entitlement for its own instance and for the number of devices it orchestrates and their NED types. If development mode has been enabled, only a development entitlement for the NSO instance itself is requested.
-2. Inspect the requested entitlements using the command `show license all` (or by inspecting the NSO daemon log). An example output is shown below.
-
- ```bash
- admin@ncs# show license all
- ...
- 21-Apr-2016::11:29:18.022 miosaterm confd[8226]:
- Smart Licensing Global Notification:
- type = "notifyRegisterSuccess",
- agentID = "sa1",
- enforceMode = "notApplicable",
- allowRestricted = false,
- failReasonCode = "success",
- failMessage = "Successful."
- 21-Apr-2016::11:29:23.029 miosaterm confd[8226]:
- Smart Licensing Entitlement Notification: type = "notifyEnforcementMode",
- agentID = "sa1",
- notificationTime = "Apr 21 11:29:20 2016",
- version = "1.0",
- displayName = "regid.2015-10.com.cisco.NSO-network-element",
- requestedDate = "Apr 21 11:26:19 2016",
- tag = "regid.2015-10.com.cisco.NSO-network-element",
- enforceMode = "inCompliance",
- daysLeft = 90,
- expiryDate = "Jul 20 11:26:19 2016",
- requestedCount = 8
- ...
- ```
-
-
-
-Evaluation Period
-
-If no registration token is provided, NSO enters a 90-day evaluation period and the remaining evaluation time is recorded hourly in the NSO daemon log:
-
-```
- ...
- 13-Apr-2016::13:22:29.178 miosaterm confd[16260]:
-Starting the NCS Smart Licensing Java VM
- 13-Apr-2016::13:22:34.737 miosaterm confd[16260]:
-Smart Licensing evaluation time remaining: 90d 0h 0m 0s
-...
- 13-Apr-2016::13:22:34.737 miosaterm confd[16260]:
-Smart Licensing evaluation time remaining: 89d 23h 0m 0s
-...
-```
-
-
-
-
-
-Communication Send Error
-
-During upgrades, if you experience a 'Communication Send Error' during license registration, restart the Smart Agent.
-
-
-
-
-
-If You are Unable to Access Cisco Smart Software Manager
-
-In a situation where the NSO instance has no direct access to the Cisco Smart Software Manager, one option is the [Cisco Smart Software Manager Satellite](https://software.cisco.com/software/csws/ws/platform/home), which can be installed to manage software licenses on the premises. Install the satellite and use the command `call-home destination address http <url>` to point to the satellite.
-
-Another option when direct access is not desired is to configure an HTTP or HTTPS proxy, e.g., `smart-license smart-agent proxy url https://127.0.0.1:8080`. If you plan to do this, take the note below regarding ignored CLI configurations into account:
-
-If `ncs.conf` contains a configuration for any of `java-executable`, `java-options`, `override-url/url`, or `proxy/url` under the configuration path `/ncs-config/smart-license/smart-agent/`, then any corresponding configuration done via the CLI is ignored.
-
-
-
-
-
-License Registration in HA Mode
-
-When configuring NSO in High Availability (HA) mode, the license registration token must be provided to the CLI running on the primary node. Read more about HA and node types in [High Availability](../management/high-availability.md).
-
-
-
-
-
-Licensing Log
-
-Licensing activities are also logged in the NSO daemon log as described in [Monitoring NSO](../management/system-management/#d5e7876). For example, a successful token registration results in the following log entry:
-
-```
- 21-Apr-2016::11:29:18.022 miosaterm confd[8226]:
-Smart Licensing Global Notification:
-type = "notifyRegisterSuccess"
-```
-
-
-
-
-
-Check Registration Status
-
-To check the registration status, use the command `show license status`.
-
-```bash
-admin@ncs# show license status
-
-Smart Licensing is ENABLED
-
-Registration:
-Status: REGISTERED
-Smart Account: Network Services Orchestrator
-Virtual Account: Default
-Export-Controlled Functionality: Allowed
-Initial Registration: SUCCEEDED on Apr 21 09:29:11 2016 UTC
-Last Renewal Attempt: SUCCEEDED on Apr 21 09:29:16 2016 UTC
-Next Renewal Attempt: Oct 18 09:29:16 2016 UTC
-Registration Expires: Apr 21 09:26:13 2017 UTC
-Export-Controlled Functionality: Allowed
-
-License Authorization:
-Status: IN COMPLIANCE on Apr 21 09:29:18 2016 UTC
-Last Communication Attempt: SUCCEEDED on Apr 21 09:26:30 2016 UTC
-Next Communication Attempt: Apr 21 21:29:32 2016 UTC
-Communication Deadline: Apr 21 09:26:13 2017 UTC
-```
-
-
-
-## System Install FAQs
-
-Frequently Asked Questions (FAQs) about System Install.
-
-
-
-Is there a dependency between the NSO Installation Directory and Runtime Directory?
-
-No, there is no such dependency.
-
-
-
-
-
-Do you need to source the ncsrc file before starting NSO?
-
-No. With a System Install, the environment variables are configured and set for the shell by default.
-
-
-
-
-
-Can you start NSO from a directory that is not an NSO runtime directory?
-
-Yes.
-
-
-
-
-
-Can you stop NSO from a directory that is not an NSO runtime directory?
-
-Yes.
-
-
-
-
-
-For evaluation and development purposes, instead of a Local Install, you performed a System Install. Now you cannot build or run NSO examples as described in README files. How can you proceed further?
-
-The easiest way is to uninstall the System Install using `ncs-uninstall --all` and do a Local Install from scratch.
-
-
-
-
-
-Can you move an NSO installation from one folder to another?
-
-No.
-
-
diff --git a/administration/installation-and-deployment/upgrade-nso.md b/administration/installation-and-deployment/upgrade-nso.md
deleted file mode 100644
index 13b05ed4..00000000
--- a/administration/installation-and-deployment/upgrade-nso.md
+++ /dev/null
@@ -1,501 +0,0 @@
----
-description: Upgrade NSO to a higher version.
----
-
-# Upgrade NSO
-
-Upgrading the NSO software gives you access to new features and product improvements. Every change carries a risk, and upgrades are no exception. To minimize the risk and make the upgrade process as painless as possible, this section describes the recommended procedures and practices to follow during an upgrade.
-
-As usual, sufficient preparation avoids many pitfalls and makes the process more straightforward and less stressful.
-
-## Preparing for Upgrade
-
-There are multiple aspects that you should consider before starting with the actual upgrade procedure. While the development team tries to provide as much compatibility between software releases as possible, they cannot always avoid all incompatible changes. For example, when a deviation from an RFC standard is found and resolved, it may break clients that depend on the non-standard behavior. For this reason, a distinction is made between maintenance and a major NSO upgrade.
-
-A maintenance NSO upgrade is within the same branch, i.e., when the first two version numbers stay the same (x.y in the x.y.z NSO version). An example is upgrading from version 6.2.1 to 6.2.2. In the case of a maintenance upgrade, the NSO release contains only corrections and minor enhancements, minimizing the changes. It includes binary compatibility for packages, so there is no need to recompile the .fxs files for a maintenance upgrade.
-
-Correspondingly, when the first or second number in the version changes, that is called a full or major upgrade. For example, upgrading version 6.3.1 to 6.4 is a major, non-maintenance upgrade. Due to new features, packages must be recompiled, and some incompatibilities could manifest.
-
-In addition to the above, a package upgrade is when you replace a package with a newer version, such as a NED or a service package. Sometimes, when the package changes are not too big, it is possible to supply the new packages as part of the NSO upgrade, but this approach brings additional complexity. Instead, a package upgrade and an NSO upgrade should, in general, be performed as separate actions and are covered as such.
-
-To avoid surprises during any upgrade, first ensure the following:
-
-* Hosts have sufficient disk space, as some additional space is required for an upgrade.
-* The software is compatible with the target OS. However, sometimes a newer version of Java or system libraries, such as glibc, may be required.
-* All the required NEDs and custom packages are compatible with the target NSO version. If you're planning to run the upgraded version in FIPS-compliant mode, make sure to upgrade the NEDs to the latest version.
-* Existing packages have been compiled for the new version and are available to you during the upgrade.
-* Check whether the existing `ncs.conf` file can be used as-is or needs updating. For example, stronger encryption algorithms may require you to configure additional keying material.
-* Review the `CHANGES` file for information on what has changed.
-* If upgrading from a no longer supported software version, verify that the upgrade can be performed directly. In situations where the currently installed version is very old, you may have to upgrade to one or more intermediate versions before upgrading to the target version.
-
-In case it turns out that any of the packages are incompatible or cannot be recompiled, you will need to contact the package developers for an updated or recompiled version. For an official Cisco-supplied package, it is recommended that you always obtain a pre-compiled version if it is available for the target NSO release, instead of compiling the package yourself.
-
-Additional preparation steps may be required based on the upgrade and the actual setup, such as when using the Layered Service Architecture (LSA) feature. In particular, for a major NSO upgrade in a multi-version LSA cluster, ensure that the new version supports the other cluster members and follow the additional steps outlined in [Deploying LSA](../advanced-topics/layered-service-architecture.md#deploying-lsa) in Layered Service Architecture.
-
-If you use the High Availability (HA) feature, the upgrade consists of multiple steps on different nodes. To avoid mistakes, you are encouraged to script the process, for which you will need to set up and verify access to all NSO instances with either `ssh`, `nct`, or some other remote management command. For the reference example used in this chapter, see [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc). The management station uses shell and Python scripts that rely on `ssh` to access the Linux shell and NSO CLI, and on Python Requests for NSO RESTCONF interface access.
-
-Likewise, NSO 5.3 added support for 256-bit AES encrypted strings, which requires an AES256CFB128 key in the `ncs.conf` configuration. You can generate one with `openssl rand -hex 32` or a similar command. Alternatively, if you use an external command to provide keys, ensure that it includes a value for `AES256CFB128_KEY` in the output.
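-
-For example, a sketch that generates such a key and prints it in an `AES256CFB128_KEY=value` line, mirroring the variable named above; the exact output format your external command must produce should be verified against your NSO version:
-
-```bash
-echo "AES256CFB128_KEY=$(openssl rand -hex 32)"
-```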
-
-With regard to the init system, NSO 6.4 introduces `systemd` as the default option instead of SysV. In interactive mode, when upgrading to NSO 6.4 or later, the installer prompts the user to continue using the old SysV service or prepare a `systemd` service. In non-interactive mode, a `systemd` service is prepared by default. When using the `--non-interactive` option, the `/etc/systemd/system/ncs.service` file will be overwritten if it already exists.
-
-Finally, regardless of the upgrade type, ensure that you have a working backup and can easily restore the previous configuration if needed, as described in [Backup and Restore](../management/system-management/#backup-and-restore).
-
-{% hint style="danger" %}
-**Caution**
-
-The `ncs-backup` (and consequently the `nct backup`) command does not back up the `/opt/ncs/packages` folder. If you make any file changes, back them up separately.
-
-However, the best practice is not to modify packages in the `/opt/ncs/packages` folder. Instead, if an upgrade requires package recompilation, separate package folders (or files) should be used, one for each NSO version.
-{% endhint %}
-
-## Single Instance Upgrade
-
-The upgrade of a single NSO instance requires the following steps:
-
-1. Create a backup.
-2. Perform a System Install of the new version.
-3. Stop the old NSO server process.
-4. Compact the CDB write log.
-5. Update the `/opt/ncs/current` symbolic link.
-6. If required, update the `ncs.conf` configuration file.
-7. Update the packages in `/var/opt/ncs/packages/` if recompilation is needed.
-8. Start the NSO server process, instructing it to reload the packages.
-
-{% hint style="info" %}
-The following steps assume that you are upgrading to the 6.5 release. They pertain to a System Install of NSO, and you must perform them with Super User privileges.
-
-If you're upgrading from a non-FIPS setup to a [FIPS](https://www.nist.gov/itl/publications-0/federal-information-processing-standards-fips)-compliant setup, ensure that the system requirements comply to FIPS mode install. This entails considering FIPS compliance at OS level as well as configuring NSO to use only FIPS-validated algorithms for keys and certificates.
-{% endhint %}
-
-{% stepper %}
-{% step %}
-As a best practice, always create a backup before trying to upgrade.
-
-```bash
-# ncs-backup
-```
-{% endstep %}
-
-{% step %}
-For the upgrade itself, you must first download to the host and install the new NSO release. At this point, you can choose to install NSO in standard mode or in FIPS mode.
-
-{% tabs %}
-{% tab title="Standard System Install" %}
-The standard mode is the regular NSO install and is suitable for most installations. FIPS is disabled in this mode.
-
-For standard NSO installation, run the installer as below:
-
-```bash
-# sh nso-6.5.linux.x86_64.installer.bin --system-install
-```
-{% endtab %}
-
-{% tab title="FIPS System Install" %}
-FIPS mode creates a FIPS-compliant NSO install.
-
-FIPS mode should only be used for deployments that are subject to strict compliance regulations as the cryptographic functions are then confined to the CiscoSSL FIPS 140-3 module library.
-
-For FIPS-compliant NSO install, run the installer with the additional `--fips-install` flag. Afterwards, if needed, enable FIPS in `ncs.conf` as described further below.
-
-```bash
-# sh nso-6.5.linux.x86_64.installer.bin --system-install --fips-install
-```
-{% endtab %}
-{% endtabs %}
-{% endstep %}
-
-{% step %}
-Stop the currently running server with the help of `systemd` or an equivalent command relevant to your system.
-
-```bash
-# systemctl stop ncs
-Stopping ncs: .
-```
-{% endstep %}
-
-{% step %}
-Compact the CDB write log using the `ncs --cdb-compact` command, as shown below.
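-
-For example, with the run directory from this install:
-
-```bash
-# ncs --cdb-compact $NCS_RUN_DIR/cdb
-```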
-{% endstep %}
-
-{% step %}
-Next, you update the symbolic link for the currently selected version to point to the newly installed one, 6.5 in this case.
-
-```bash
-# cd /opt/ncs
-# rm -f current
-# ln -s ncs-6.5 current
-```
-{% endstep %}
-
-{% step %}
-While seldom necessary, at this point, you would also update the `/etc/ncs/ncs.conf` file. If you ran the installer with FIPS mode, update `ncs.conf` accordingly.
-
-{% hint style="info" %}
-**NSO Configuration for FIPS**
-
-Note the following as part of FIPS-specific configuration:
-
-1. If you're upgrading from a non-FIPS version (e.g., 6.4) to a FIPS-compliant version (e.g., 6.5), the following `ncs.conf` entry needs to be manually added to enable FIPS. Afterwards, upon upgrading between FIPS-compliant versions, the existing entry automatically updates, eliminating the need for any manual intervention.
-
-```xml
-<fips-mode>
-  <enabled>true</enabled>
-</fips-mode>
-```
-
-2. Additional environment variables (`NCS_OPENSSL_CONF_INCLUDE`, `NCS_OPENSSL_CONF`, `NCS_OPENSSL_MODULES`) are configured in `ncsrc` for FIPS compliance.
-3. The default `crypto.so` is overwritten at install for FIPS compliance.
-
-Additionally, note that:
-
-* As certain algorithms typically available with CiscoSSL are not included in the FIPS 140-3 validated module (and therefore disabled in FIPS mode), you need to configure NSO to use only the algorithms and cryptographic suites available through the CiscoSSL FIPS 140-3 object module.
-* With FIPS, NSO signals the NEDs to operate in FIPS mode using Bouncy Castle FIPS libraries for Java-based components, ensuring compliance with FIPS 140-3. To support this, NED packages may also require upgrading, as older versions — particularly SSH-based NEDs — often lack the necessary FIPS signaling or Bouncy Castle support required for cryptographic compliance.
-* Configure SSH keys in `ncs.conf` and `init.xml`.
-{% endhint %}
-{% endstep %}
-
-{% step %}
-Now, ensure that the `/var/opt/ncs/packages/` directory has appropriate packages for the new version. It should be possible to continue using the same packages for a maintenance upgrade. But for a major upgrade, you must normally rebuild the packages or use pre-built ones for the new version. You must ensure this directory contains the exact same version of each existing package, compiled for the new release, and nothing else.
-
-As a best practice, the available packages are kept in `/opt/ncs/packages/` and `/var/opt/ncs/packages/` only contains symbolic links. In this case, to identify the release for which they were compiled, the package file names all start with the corresponding NSO version. Then, you only need to rearrange the symbolic links in the `/var/opt/ncs/packages/` directory.
-
-```bash
-# cd /var/opt/ncs/packages/
-# rm -f *
-# for pkg in /opt/ncs/packages/ncs-6.5-*; do ln -s $pkg; done
-```
-
-{% hint style="warning" %}
-Please note that the above package naming scheme is neither required nor enforced. If your package file names differ from it, you will need to adjust the preceding command accordingly.
-{% endhint %}
-{% endstep %}
-
-{% step %}
-Finally, you start the new version of the NSO server with the `package reload` flag set, by setting `NCS_RELOAD_PACKAGES=true` in `/etc/ncs/ncs.systemd.conf` before starting NSO.
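-
-One way to set the variable is with `sed`; this is a minimal sketch that assumes an `NCS_RELOAD_PACKAGES` line already exists in the file:
-
-```bash
-# sed -i 's/^NCS_RELOAD_PACKAGES=.*/NCS_RELOAD_PACKAGES=true/' /etc/ncs/ncs.systemd.conf
-```
-
-Then start NSO: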
-
-```bash
-# systemctl start ncs
-Starting ncs: ...
-```
-
-Set the `NCS_RELOAD_PACKAGES` variable in `/etc/ncs/ncs.systemd.conf` back to its previous value, or the system will keep performing a package reload at subsequent starts.
-
-NSO will perform the necessary data upgrade automatically. However, this process may fail if you have changed or removed any packages. In that case, ensure that the correct versions of all packages are present in `/var/opt/ncs/packages/` and retry the preceding command.
-
-Also, note that with many packages or data entries in the CDB, this process could take more than 90 seconds and result in the following error message:
-
-```
-Starting ncs (via systemctl): Job for ncs.service failed
-because a timeout was exceeded. See "systemctl status
-ncs.service" and "journalctl -xe" for details. [FAILED]
-```
-
-The above error does not imply that NSO failed to start, just that it took longer than 90 seconds. Therefore, it is recommended you wait some additional time before verifying.
-{% endstep %}
-{% endstepper %}
-
-## Recover from a Failed Upgrade
-
-It is imperative that you have a working copy of data available from which you can restore. That is why you must always create a backup before starting an upgrade. Only a backup guarantees that you can rerun the upgrade or back out of it, should it be necessary.
-
-The same steps can also be used to restore data on a new, similar host if the OS of the initial host becomes corrupted beyond repair.
-
-1. First, stop the NSO process if it is running.
-
- ```bash
- # systemctl stop ncs
- Stopping ncs: .
- ```
-2. Verify and, if necessary, revert the symbolic link in `/opt/ncs/` to point to the initial NSO release.
-
-    ```bash
-    # cd /opt/ncs
-    # ls -l current
-    # rm -f current
-    # ln -s ncs-VERSION current
-    ```
-
- \
- In the exceptional case where the initial version installation was removed or damaged, you will need to re-install it first and redo the step above.
-3. Verify that the correct (initial) version of NSO is being used.
-
- ```bash
- # ncs --version
- ```
-4. Next, restore the backup.
-
- ```bash
- # ncs-backup --restore
- ```
-5. Finally, start the NSO server and verify the restore was successful.
-
- ```bash
- # systemctl start ncs
- Starting ncs: .
- ```
-
-## NSO HA Version Upgrade
-
-Upgrading NSO in a highly available (HA) setup is a staged process. It entails running various commands across multiple NSO instances at different times.
-
-The procedure described in this section is used with the rule-based built-in HA clusters. For HA Raft cluster instructions, refer to [Version Upgrade of Cluster Nodes](../management/high-availability.md) in the HA documentation.
-
-The procedure is almost the same for maintenance and major NSO upgrades. The difference is that a major upgrade requires the replacement of packages with recompiled ones. Still, a maintenance upgrade is often perceived as easier because there are fewer changes in the product.
-
-The stages of the upgrade are:
-
-1. First, enable read-only mode on the designated `primary`, and then on the `secondary` that is enabled for fail-over.
-2. Take a full backup on all nodes.
-3. If using a 3-node setup, disconnect the 3rd, non-fail-over `secondary` by disabling HA on this node.
-4. Disconnect the HA pair by disabling HA on the designated `primary`, temporarily promoting the designated `secondary` to provide the read-only service (and advertise the shared virtual IP address if it is used).
-5. Upgrade the designated `primary`.
-6. Disable HA on the designated `secondary` node, to allow the designated `primary` to become the actual `primary` in the next step.
-7. Activate HA on the designated `primary`, which will assume its assigned (`primary`) role to provide the full service (and again advertise the shared IP if used). However, at this point, the system is without HA.
-8. Upgrade the designated `secondary` node.
-9. Activate HA on the designated `secondary`, which will assume its assigned (`secondary`) role, connecting HA again.
-10. Verify that HA is operational and has converged.
-11. Upgrade the 3rd, non-fail-over `secondary` if it is used, and verify it successfully rejoins the HA cluster.
-
-Enabling the read-only mode on both nodes is required to ensure that the subsequent backup captures the full system state, as well as to make sure the `failover-primary` does not start taking writes when it is promoted later on.
-
-Disabling the non-fail-over `secondary` in a 3-node setup right after taking a backup is necessary when using the built-in HA rule-based algorithm (enabled by default in NSO 5.8 and later). Without it, the node might connect to the `failover-primary` when the failover happens, which disables read-only mode.
-
-While not strictly necessary, explicitly promoting the designated `secondary` after disabling HA on the `primary` ensures a fast failover, avoiding the automatic reconnection attempts. If using a shared IP solution, such as the Tail-f HCC, this makes sure the shared VIP comes back up on the designated `secondary` as soon as possible. In addition, some older NSO versions do not reset the read-only mode upon disabling HA if they are not acting `primary`.
-
-Another important thing to note is that all packages used in the upgrade must match the NSO release. If they do not, the upgrade will fail.
-
-In the case of a major upgrade, you must recompile the packages for the new version. It is highly recommended that you use pre-compiled packages and do not compile them during this upgrade procedure since the compilation can prove nontrivial, and the production hosts may lack all the required (development) tooling. You should use a naming scheme to distinguish between packages compiled for different NSO versions. A good option is for package file names to start with the `ncs-MAJORVERSION-` prefix for a given major NSO version. This ensures multiple packages can co-exist in the `/opt/ncs/packages` folder, and the NSO version they can be used with becomes obvious.
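-
-For instance, a package directory following this scheme might look as follows (the file names are illustrative):
-
-```bash
-# ls /opt/ncs/packages
-ncs-6.4-router-nc-1.0.2.tar.gz  ncs-6.5-router-nc-1.0.2.tar.gz
-```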
-
-The following is a transcript of a sample upgrade procedure, showing the commands for each step described above, in a 2-node HA setup, with nodes in their initial designated state. The procedure ensures that this is also the case in the end.
-
-```xml
-<!-- On the designated primary node: -->
-admin@ncs# show high-availability status mode
-high-availability status mode primary
-admin@ncs# high-availability read-only mode true
-
-<!-- On the designated secondary node: -->
-admin@ncs# show high-availability status mode
-high-availability status mode secondary
-admin@ncs# high-availability read-only mode true
-
-<!-- In the shell of the designated primary node: -->
-# ncs-backup
-
-<!-- In the shell of the designated secondary node: -->
-# ncs-backup
-
-<!-- On the designated primary node: -->
-admin@ncs# high-availability disable
-
-<!-- On the designated secondary node: -->
-admin@ncs# high-availability be-primary
-
-<!-- In the shell of the designated primary node: -->
-# <install the new NSO release and update the /opt/ncs/current symlink>
-# <replace the packages in /var/opt/ncs/packages/ with rebuilt ones>
-# systemctl restart ncs
-# <verify that the node is up and operational>
-
-<!-- On the designated secondary node: -->
-admin@ncs# high-availability disable
-
-<!-- On the designated primary node: -->
-admin@ncs# high-availability enable
-
-<!-- In the shell of the designated secondary node: -->
-# <install the new NSO release and update the /opt/ncs/current symlink>
-# <replace the packages in /var/opt/ncs/packages/ with rebuilt ones>
-# systemctl restart ncs
-# <verify that the node is up and operational>
-
-<!-- On the designated secondary node: -->
-admin@ncs# high-availability enable
-```
-
-Scripting is a recommended way to upgrade the NSO version of an HA cluster. The following example script shows the required commands and can serve as a basis for your own customized upgrade script. In particular, the script relies on the specific package naming convention described above, and you may need to tailor it to your environment. In addition, it expects the new release version and the designated `primary` and `secondary` node addresses as arguments. The recompiled packages are read from the `packages-MAJORVERSION/` directory.
-
-For the example script below, the `primary` and `secondary` nodes are configured with nominal roles, which they assume at startup and when HA is enabled. Automatic failover is also enabled, so that the `secondary` will assume the `primary` role if the `primary` node goes down.
-
-{% code title="Configuration on Both Nodes" %}
-```xml
-<high-availability xmlns="http://tail-f.com/ns/ncs">
-  <ha-node>
-    <id>n1</id>
-    <nominal-role>primary</nominal-role>
-  </ha-node>
-  <ha-node>
-    <id>n2</id>
-    <nominal-role>secondary</nominal-role>
-    <failover-primary>true</failover-primary>
-  </ha-node>
-  <settings>
-    <enable-failover>true</enable-failover>
-    <start-up>
-      <assume-nominal-role>true</assume-nominal-role>
-      <join-ha>true</join-ha>
-    </start-up>
-  </settings>
-</high-availability>
-```
-{% endcode %}
-
-{% code title="Script for HA Major Upgrade (with Packages)" %}
-```
-#!/bin/bash
-set -ex
-
-vsn=$1
-primary=$2
-secondary=$3
-installer_file=nso-${vsn}.linux.x86_64.installer.bin
-pkg_vsn=$(echo $vsn | sed -e 's/^\([0-9]\+\.[0-9]\+\).*/\1/')
-pkg_dir="packages-${pkg_vsn}"
-
-function on_primary() { ssh $primary "$@" ; }
-function on_secondary() { ssh $secondary "$@" ; }
-function on_primary_cli() { ssh -p 2024 $primary "$@" ; }
-function on_secondary_cli() { ssh -p 2024 $secondary "$@" ; }
-
-function upgrade_nso() {
- target=$1
- scp $installer_file $target:
- ssh $target "sh $installer_file --system-install --non-interactive"
- ssh $target "rm -f /opt/ncs/current && \
- ln -s /opt/ncs/ncs-${vsn} /opt/ncs/current"
-}
-function upgrade_packages() {
- target=$1
- do_pkgs=$(ls "${pkg_dir}/" || echo "")
- if [ -n "${do_pkgs}" ] ; then
- cd ${pkg_dir}
- ssh $target 'rm -rf /var/opt/ncs/packages/*'
- for p in ncs-${pkg_vsn}-*.gz; do
- scp $p $target:/opt/ncs/packages/
- ssh $target "ln -s /opt/ncs/packages/$p /var/opt/ncs/packages/"
- done
- cd -
- fi
-}
-
-# Perform the actual procedure
-
-on_primary_cli 'request high-availability read-only mode true'
-on_secondary_cli 'request high-availability read-only mode true'
-
-on_primary 'ncs-backup'
-on_secondary 'ncs-backup'
-
-on_primary_cli 'request high-availability disable'
-on_secondary_cli 'request high-availability be-primary'
-upgrade_nso $primary
-upgrade_packages $primary
-on_primary 'mv /etc/ncs/ncs.systemd.conf /etc/ncs/ncs.systemd.conf.bak'
-on_primary 'echo "NCS_RELOAD_PACKAGES=true" > /etc/ncs/ncs.systemd.conf'
-on_primary 'systemctl restart ncs'
-on_primary 'mv /etc/ncs/ncs.systemd.conf.bak /etc/ncs/ncs.systemd.conf'
-
-
-on_secondary_cli 'request high-availability disable'
-on_primary_cli 'request high-availability enable'
-upgrade_nso $secondary
-upgrade_packages $secondary
-on_secondary 'mv /etc/ncs/ncs.systemd.conf /etc/ncs/ncs.systemd.conf.bak'
-on_secondary 'echo "NCS_RELOAD_PACKAGES=true" > /etc/ncs/ncs.systemd.conf'
-on_secondary 'systemctl restart ncs'
-on_secondary 'mv /etc/ncs/ncs.systemd.conf.bak /etc/ncs/ncs.systemd.conf'
-
-on_secondary_cli 'request high-availability enable'
-```
-{% endcode %}
-
-Once the script completes, it is paramount that you manually verify the outcome. First, check that HA is enabled by using the `show high-availability` command on the CLI of each node. Then connect to the designated secondaries and ensure they have the complete latest copy of the data, synchronized from the primaries.
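-
-For example, on each node:
-
-```bash
-admin@ncs# show high-availability
-```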
-
-After the `primary` node is upgraded and restarted, the read-only mode is automatically disabled. This allows the `primary` node to start processing writes, minimizing downtime. However, there is no HA. Should the `primary` fail at this point or you need to revert to a pre-upgrade backup, the new writes would be lost. To avoid this scenario, again enable read-only mode on the `primary` after re-enabling HA. Then disable read-only mode only after successfully upgrading and reconnecting the `secondary`.
-
-To further reduce time spent upgrading, you can customize the script to install the new NSO release and copy packages beforehand. Then, you only need to switch the symbolic links and restart the NSO process to use the new version.
-
-You can use the same script for a maintenance upgrade as-is, with an empty `packages-MAJORVERSION` directory, or remove the `upgrade_packages` calls from the script.
-
-Example implementations that use scripts to upgrade a 2- and 3-node setup using CLI/MAAPI or RESTCONF are available in the NSO example set under [examples.ncs/high-availability](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability).
-
-We have been using a two-node HCC layer-2 upgrade reference example elsewhere in the documentation to demonstrate installing NSO and adding the initial configuration. The `upgrade-l2` example referenced in [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) implements shell- and Python-scripted steps to upgrade the NSO version, using `ssh` to the Linux shell and the NSO CLI, or Python Requests RESTCONF, to access the `paris` and `london` nodes. See the example for details.
-
-If you do not wish to automate the upgrade process, you will need to follow the instructions from [Single Instance Upgrade](upgrade-nso.md#ug.admin_guide.manual_upgrade) and transfer the required files to each host manually. Additional information on HA is available in [High Availability](../management/high-availability.md). You can still run the `high-availability` actions from the preceding script on the NSO CLI as-is; in that case, take special care about which host you perform each command on, as it is easy to mix them up.
-
-## Package Upgrade
-
-Package upgrades are frequent and routine in development but require the same care as NSO upgrades in the production environment. The reason is that the new packages may contain an updated YANG model, resulting in a data upgrade process similar to a version upgrade. So, if a package is removed or uninstalled and a replacement is not provided, package-specific data, such as service instance data, will also be removed.
-
-In a single-node environment, the procedure is straightforward. Create a backup with the `ncs-backup` command and ensure the new package is compiled for the current NSO version and available under the `/opt/ncs/packages` directory. Then either manually rearrange the symbolic links in the `/var/opt/ncs/packages` directory or use the `software packages install` command in the NSO CLI. Finally, invoke the `packages reload` command. For example:
-
-```bash
-# ncs-backup
-INFO Backup /var/opt/ncs/backups/ncs-6.4@2024-04-21T10:34:42.backup.gz created
-successfully
-# ls /opt/ncs/packages
-ncs-6.4-router-nc-1.0 ncs-6.4-router-nc-1.0.2
-# ncs_cli -C
-admin@ncs# software packages install package router-nc-1.0.2 replace-existing
-installed ncs-6.4-router-nc-1.0.2
-admin@ncs# packages reload
-
->>> System upgrade is starting.
->>> Sessions in configure mode must exit to operational mode.
->>> No configuration changes can be performed until upgrade has completed.
->>> System upgrade has completed successfully.
-reload-result {
- package router-nc-1.0.2
- result true
-}
-```
-
-On the other hand, upgrading packages in an HA setup is an error-prone process. Thus, NSO provides an action, `packages ha sync and-reload`, to minimize such complexity. It is considerably faster and more efficient than upgrading one node at a time.
-
-{% hint style="info" %}
-If the only change in the packages is the addition of new NED packages, the `and-add` command can replace `and-reload` for an even more optimized and less intrusive update. See [Adding NED Packages](../management/package-mgmt.md#ug.package_mgmt.ned_package_add) for details.
-{% endhint %}
-
-The action executes on the `primary` node. First, it syncs the physical packages from the `primary` node to the `secondary` nodes as tar archive files, regardless of whether the packages were initially added as directories or tar archives. Then, it performs the upgrade on all nodes in one go. The action does not sync packages to, or upgrade, nodes with the `none` role.
-
-The `packages ha sync` action only distributes new packages to the `secondary` nodes. If a package already exists on a `secondary` node, it is replaced with the one from the `primary` node. Deleting a package on the `primary` node will also delete it on the `secondary` nodes. Packages found in load paths under the installation destination (by default `/opt/ncs/current`) are not distributed, as they belong to the system and should not differ between the `primary` and the `secondary` nodes.
-
-It is crucial to ensure that the load path configuration is identical on both `primary` and `secondary` nodes. Otherwise, the distribution will not start, and the action output will contain detailed error information.
-
-Using the `and-reload` parameter with the action starts the upgrade once packages are copied over. The action sets the `primary` node to read-only mode. After the upgrade is successfully completed, the node is set back to its previous mode.
-
-If the parameter `and-reload` is also supplied with the `wait-commit-queue-empty` parameter, it will wait for the commit queue to become empty on the `primary` node and prevent other queue items from being added while the queue is being drained.
-
-Using the `wait-commit-queue-empty` parameter is the recommended approach, as it minimizes the risk of the upgrade failing due to commit queue items still relying on the old schema.
-
-{% code title="Package Upgrade Procedure" %}
-```bash
-primary@node1# software packages list
-package {
- name dummy-1.0.tar.gz
- loaded
-}
-primary@node1# software packages fetch package-from-file \
-$MY_PACKAGE_STORE/dummy-1.1.tar.gz
-primary@node1# software packages install package dummy-1.1 replace-existing
-primary@node1# packages ha sync and-reload { wait-commit-queue-empty }
-```
-{% endcode %}
-
-The `packages ha sync and-reload` command has the following known limitations and side effects:
-
-* The `primary` node is set to `read-only` mode before the upgrade starts, and it is set back to its previous mode if the upgrade completes successfully. However, the node will always be left in read-write mode if an error occurs during the upgrade. It is up to the user to set the node back to the desired mode using the `high-availability read-only mode` command, as shown after this list.
-* As a best practice, you should create a backup of all nodes before upgrading. This action creates no backups; you must do that explicitly.
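-
-For example, to put the node back into read-only mode:
-
-```bash
-admin@ncs# high-availability read-only mode true
-```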
-
-Example implementations that use scripts to upgrade a 2- and 3-node setup using CLI/MAAPI or RESTCONF are available in the NSO example set under [examples.ncs/high-availability](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability).
-
-We have been using a two-node HCC layer-2 upgrade reference example elsewhere in the documentation to demonstrate installing NSO and adding the initial configuration. The `upgrade-l2` example referenced in [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) implements shell- and Python-scripted steps to upgrade the `primary` `paris` package versions and sync the packages to the `secondary` `london`, using `ssh` to the Linux shell and the NSO CLI, or Python Requests RESTCONF, to access the `paris` and `london` nodes. See the example for details.
-
-In some cases, NSO may warn when the upgrade looks suspicious. For more information on this, see [Loading Packages](../management/package-mgmt.md#ug.package_mgmt.loading). If you understand the implications and are willing to risk losing data, use the `force` option with `packages reload` or set the `NCS_RELOAD_PACKAGES` environment variable to `force` when restarting NSO. It will force NSO to ignore warnings and proceed with the upgrade. In general, this is not recommended.
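-
-For example, to force the reload from the CLI:
-
-```bash
-admin@ncs# packages reload force
-```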
-
-In addition, you must take special care with NED upgrades because services depend on them. In particular, since NSO 5 introduced the CDM feature, which allows loading multiple versions of a NED, a major NED upgrade requires a procedure involving the `migrate` action.
-
-When a NED contains nontrivial YANG model changes, that is called a major NED upgrade. The NED ID changes, and the first or second number in the NED version changes, since NEDs follow the same versioning scheme as NSO. In this case, you cannot simply replace the package, as you would for a maintenance or patch NED release. Instead, you must load (add) the new NED package alongside the old one and perform the migration.
-
-Migration uses the `/ncs:devices/device/migrate` action to change the ned-id of a single device or a group of devices. It does not affect the actual network device, except possibly reading from it. So, the migration does not have to be performed as part of the package upgrade procedure described above but can be done later, during normal operations. The details are described in [NED Migration](../management/ned-administration.md#sec.ned_migration). Once the migration is complete, you can remove the old NED by performing another package upgrade, where you uninstall the old NED package. This can be done straight after the migration or as part of the next upgrade cycle.
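-
-A minimal sketch of migrating a single device from the CLI (the device name and ned-id are illustrative):
-
-```bash
-admin@ncs# devices device ce0 migrate new-ned-id router-nc-1.1
-```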
diff --git a/administration/management/README.md b/administration/management/README.md
deleted file mode 100644
index 49e72dfa..00000000
--- a/administration/management/README.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-description: Perform system management tasks on your NSO deployment.
-icon: folder-gear
----
-
-# Management
-
diff --git a/administration/management/aaa-infrastructure.md b/administration/management/aaa-infrastructure.md
deleted file mode 100644
index fe2c53ba..00000000
--- a/administration/management/aaa-infrastructure.md
+++ /dev/null
@@ -1,1473 +0,0 @@
----
-description: >-
- Manage user authentication, authorization, and audit using NSO's AAA
- mechanism.
----
-
-# AAA Infrastructure
-
-## The Problem
-
-Users log into NSO through the CLI, NETCONF, RESTCONF, SNMP, or via the Web UI. In all cases, users need to be authenticated. That is, a user needs to present credentials, such as a password or a public key, to gain access. As an alternative, for RESTCONF, users can be authenticated via token validation.
-
-Once a user is authenticated, all operations performed by that user need to be authorized. That is, certain users may be allowed to perform certain tasks, whereas others are not. This is called authorization. We differentiate between the authorization of commands and the authorization of data access.
-
-## Structure - Data Models
-
-The NSO daemon manages device configuration, including AAA information. NSO both manages and uses the AAA information, which describes which users may log in, what passwords they have, and what they are allowed to do. This is solved in NSO by requiring a data model to be both loaded and populated with data. NSO uses the YANG module `tailf-aaa.yang` for authentication, while `ietf-netconf-acm.yang` (NETCONF Access Control Model (NACM), [RFC 8341](https://tools.ietf.org/html/rfc8341)), as augmented by `tailf-acm.yang`, is used for group assignment and authorization.
-
-### Data Model Contents
-
-The NACM data model is targeted specifically towards access control for NETCONF operations and thus lacks some functionality that is needed in NSO, in particular, support for the authorization of CLI commands and the possibility to specify the context (NETCONF, CLI, etc.) that a given authorization rule should apply to. This functionality is modeled by augmentation of the NACM model, as defined in the `tailf-acm.yang` YANG module.
-
-The `ietf-netconf-acm.yang` and `tailf-acm.yang` modules can be found in the `$NCS_DIR/src/ncs/yang` directory in the release, while `tailf-aaa.yang` can be found in the `$NCS_DIR/src/ncs/aaa` directory.
-
-NACM options related to services are modeled by augmentation of the NACM model, as defined in the `tailf-ncs-acm.yang` YANG module. The `tailf-ncs-acm.yang` module can be found in the `$NCS_DIR/src/ncs/yang` directory in the release.
-
-The complete AAA data model defines a set of users, a set of groups, and a set of rules. The data model must be populated with data that is subsequently used by NSO itself when it authenticates users and authorizes user data access. These YANG modules work exactly like all other `fxs` files loaded into the system, with the exception that NSO itself uses them. The data belongs to the application, but NSO itself is the user of the data.
-
-Since NSO requires a data model for the AAA information for its operation, it will report an error and fail to start if these data models cannot be found.
-
-## AAA-related Items in `ncs.conf`
-
-NSO itself is configured through a configuration file - `ncs.conf`. In that file, we have the following items related to authentication and authorization:
-
-* `/ncs-config/aaa/ssh-server-key-dir`: If SSH termination is enabled for NETCONF or the CLI, the NSO built-in SSH server needs to have server keys. These keys are generated by the NSO install script and by default end up in `$NCS_DIR/etc/ncs/ssh`.\
- \
- It is also possible to use OpenSSH to terminate NETCONF or the CLI. If OpenSSH is used to terminate SSH traffic, this setting has no effect.
-* `/ncs-config/aaa/ssh-pubkey-authentication`: If SSH termination is enabled for NETCONF or the CLI, this item controls how the NSO SSH daemon locates the user keys for public key authentication. See [Public Key Login](aaa-infrastructure.md#ug.aaa.public_key_login) for details.
-* `/ncs-config/aaa/local-authentication/enabled`: The term 'local user' refers to a user stored under `/aaa/authentication/users`. The alternative is a user unknown to NSO, typically authenticated by PAM. By default, NSO first checks local users before trying PAM or external authentication.\
- \
- Local authentication is practical in test environments. It is also useful when we want to have one set of users that are allowed to log in to the host with normal shell access and another set of users that are only allowed to access the system using the normal encrypted, fully authenticated, northbound interfaces of NSO.\
- \
- If we always authenticate users through PAM, it may make sense to set this configurable to `false`. If we disable local authentication, it implicitly means that we must use either PAM authentication or external authentication. It also means that we can leave the entire data trees under `/aaa/authentication/users` and, in the case of external authentication, also `/nacm/groups` (for NACM) or `/aaa/authentication/groups` (for legacy tailf-aaa) empty.
-* `/ncs-config/aaa/pam`: NSO can authenticate users using PAM (Pluggable Authentication Modules). PAM is an integral part of most Unix-like systems.\
- \
-  PAM is a complicated - albeit powerful - subsystem. It may be easier to have all users stored locally on the host. However, if we want to store users in a central location, PAM can be used to access the remote information. PAM can be configured to perform most login scenarios, including RADIUS and LDAP. One major drawback with PAM authentication is that there is no easy way to extract group information from PAM: PAM authenticates users, but it does not also assign a user to a set of groups. PAM authentication is thoroughly described later in this chapter.
-* `/ncs-config/aaa/default-group`: If this configuration parameter is defined and if the group of a user cannot be determined, a logged-in user ends up in the given default group.
-* `/ncs-config/aaa/external-authentication`: NSO can authenticate users using an external executable. This is further described later in [External Authentication](aaa-infrastructure.md#ug.aaa.external_authentication). As an alternative, you may consider using package authentication.
-* `/ncs-config/aaa/external-validation`: NSO can authenticate users by validation of tokens using an external executable. This is further described later in [External Token Validation](aaa-infrastructure.md#ug.aaa.external_validation). Where external authentication uses a username and password to authenticate a user, external validation uses a token. The validation script should use the token to authenticate a user and can, optionally, also return a new token to be returned with the result of the request. It is currently only supported for RESTCONF.
-* `/ncs-config/aaa/external-challenge`: NSO has support for multi-factor authentication by sending challenges to a user. Challenges may be sent from any of the external authentication mechanisms but are currently only supported by JSON-RPC and CLI over SSH. This is further described later in [External Multi-factor Authentication](aaa-infrastructure.md#ug.aaa.external_challenge).
-* `/ncs-config/aaa/package-authentication`: NSO can authenticate users using package authentication. It extends the concept of external authentication by allowing multiple packages to be used for authentication instead of a single executable. This is further described in [Package Authentication](aaa-infrastructure.md#ug.aaa.packageauth).
-* `/ncs-config/aaa/single-sign-on`: With this setting enabled, NSO invokes Package Authentication on all requests to HTTP endpoints with the `/sso` prefix. This way, Package Authentication packages that require custom endpoints can expose them under the `/sso` base route.\
- \
- For example, a SAMLv2 Single Sign-On (SSO) package needs to process requests to an AssertionConsumerService endpoint, such as `/sso/saml/acs`, and therefore requires enabling this setting.\
- \
- This is a valid authentication method for WEB UI and JSON-RPC interfaces and needs Package Authentication to be enabled as well.
-* `/ncs-config/aaa/single-sign-on/enable-automatic-redirect`: If only one Single Sign-On package is configured (a package with `single-sign-on-url` set in `package-meta-data.xml`) and also this setting is enabled, NSO automatically redirects all unauthenticated access attempts to the configured `single-sign-on-url`.
-
-## Authentication
-
-Depending on the northbound management protocol, when a user session is created in NSO, it may or may not be authenticated. If the session is not yet authenticated, NSO's AAA subsystem is used to perform authentication and authorization, as described below. If the session already has been authenticated, NSO's AAA assigns groups to the user as described in [Group Membership](aaa-infrastructure.md#ug.aaa.groups), and performs authorization, as described in [Authorization](aaa-infrastructure.md#ug.aaa.authorization).
-
-The authentication part of the data model can be found in `tailf-aaa.yang`:
-
-```yang
- container authentication {
- tailf:info "User management";
- container users {
- tailf:info "List of local users";
- list user {
- key name;
- leaf name {
- type string;
- tailf:info "Login name of the user";
- }
- leaf uid {
- type int32;
- mandatory true;
- tailf:info "User Identifier";
- }
- leaf gid {
- type int32;
- mandatory true;
- tailf:info "Group Identifier";
- }
- leaf password {
- type passwdStr;
- mandatory true;
- }
- leaf ssh_keydir {
- type string;
- mandatory true;
- tailf:info "Absolute path to directory where user's ssh keys
- may be found";
- }
- leaf homedir {
- type string;
- mandatory true;
- tailf:info "Absolute path to user's home directory";
- }
- }
- }
- }
-```
-
-AAA authentication is used in the following cases:
-
-* When the built-in SSH server is used for NETCONF and CLI sessions.
-* For Web UI sessions and REST access.
-* When the method `Maapi.Authenticate()` is used.
-
-NSO's AAA authentication is not used in the following cases:
-
-* When NETCONF uses an external SSH daemon, such as OpenSSH.
-
- \
- In this case, the NETCONF session is initiated using the program `netconf-subsys`, as described in [NETCONF Transport Protocols](../../development/core-concepts/northbound-apis/#ug.netconf_agent.transport) in Northbound APIs.
-* When NETCONF uses TCP, as described in [NETCONF Transport Protocols](../../development/core-concepts/northbound-apis/#ug.netconf_agent.transport) in Northbound APIs, e.g. through the command `netconf-console`.
-* When accessing the CLI by invoking the `ncs_cli`, e.g. through an external SSH daemon, such as OpenSSH, or a telnet daemon.\
- \
- An important special case here is when a user has shell access to the host and runs **ncs\_cli** from the shell. This command, as well as direct access to the IPC socket, allows for authentication bypass. It is crucial to consider this case for your deployment. If non-trusted users have shell access to the host, IPC access must be restricted. See [Authenticating IPC Access](aaa-infrastructure.md#authenticating-ipc-access).
-* When SNMP is used, SNMP has its own authentication mechanisms. See [NSO SNMP Agent](../../development/core-concepts/northbound-apis/#the-nso-snmp-agent) in Northbound APIs.
-* When the method `Maapi.startUserSession()` is used without a preceding call of `Maapi.authenticate()`.
-
-### Public Key Login
-
-When a user logs in over NETCONF or the CLI using the built-in SSH server, with a public key login, the procedure is as follows.
-
-The user presents a username in accordance with the SSH protocol. The SSH server consults the settings for `/ncs-config/aaa/ssh-pubkey-authentication` and `/ncs-config/aaa/local-authentication/enabled`.
-
-1. If `ssh-pubkey-authentication` is set to `local`, and the SSH keys in `/aaa/authentication/users/user{$USER}/ssh_keydir` match the keys presented by the user, authentication succeeds.
-2. Otherwise, if `ssh-pubkey-authentication` is set to `system`, `local-authentication` is enabled, and the SSH keys in `/aaa/authentication/users/user{$USER}/ssh_keydir` match the keys presented by the user, authentication succeeds.
-3. Otherwise, if `ssh-pubkey-authentication` is set to `system` and the user `/aaa/authentication/users/user{$USER}` does not exist, but the user does exist in the OS password database, the keys in the user's `$HOME/.ssh` directory are checked. If these keys match the keys presented by the user, authentication succeeds.
-4. Otherwise, authentication fails.
-
-In all cases the keys are expected to be stored in a file called `authorized_keys` (or `authorized_keys2` if `authorized_keys` does not exist), and in the native OpenSSH format (i.e. as generated by the OpenSSH `ssh-keygen` command). If authentication succeeds, the user's group membership is established as described in [Group Membership](aaa-infrastructure.md#ug.aaa.groups).
-
-This is exactly the same procedure that is used by the OpenSSH server, with the exception that the built-in SSH server may also locate the directory containing the public keys for a specific user by consulting the `/aaa/authentication/users` tree.
-
-### **Setting up Public Key Login**
-
-We need to provide a directory where SSH keys are kept for a specific user and give the absolute path to this directory for the `/aaa/authentication/users/user/ssh_keydir` leaf. If a public key login is not desired at all for a user, the value of the `ssh_keydir` leaf should be set to `""`, i.e. the empty string. Similarly, if the directory does not contain any SSH keys, public key logins for that user will be disabled.
-
-The built-in SSH daemon supports DSA, RSA, and ED25519 keys. To generate and enable RSA keys of size 4096 bits for, say, user "bob", the following steps are required.
-
-On the client machine, as user "bob", generate a private/public key pair as:
-
-```bash
-# ssh-keygen -b 4096 -t rsa
-Generating public/private rsa key pair.
-Enter file in which to save the key (/home/bob/.ssh/id_rsa):
-Created directory '/home/bob/.ssh'.
-Enter passphrase (empty for no passphrase):
-Enter same passphrase again:
-Your identification has been saved in /home/bob/.ssh/id_rsa.
-Your public key has been saved in /home/bob/.ssh/id_rsa.pub.
-The key fingerprint is:
-ce:1b:63:0a:f9:d4:1d:04:7a:1d:98:0c:99:66:57:65 bob@buzz
-# ls -lt ~/.ssh
-total 8
--rw------- 1 bob users 3247 Apr 4 12:28 id_rsa
--rw-r--r-- 1 bob users 738 Apr 4 12:28 id_rsa.pub
-```
-
-Now we need to copy the public key to the target machine where the NETCONF or CLI SSH server runs.
-
-Assume we have the following user entry:
-
-```xml
-<user>
-  <name>bob</name>
-  <uid>100</uid>
-  <gid>10</gid>
-  <password>$1$feedbabe$nGlMYlZpQ0bzenyFOQI3L1</password>
-  <ssh_keydir>/var/system/users/bob/.ssh</ssh_keydir>
-  <homedir>/var/system/users/bob</homedir>
-</user>
-```
-
-We need to copy the newly generated file `id_rsa.pub`, which is the public key, to a file on the target machine called `/var/system/users/bob/.ssh/authorized_keys`.
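-
-For example, from the client machine (a sketch; `target` is a placeholder for the NSO host, and the `.ssh` directory is assumed to exist there):
-
-```bash
-# scp ~/.ssh/id_rsa.pub bob@target:/var/system/users/bob/.ssh/authorized_keys
-```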
-
-{% hint style="info" %}
-Since the release of [OpenSSH 7.0](https://www.openssh.com/txt/release-7.0), support of `ssh-dss` host and user keys is disabled by default. If you want to continue using these, you may re-enable it using the following options for OpenSSH client:
-
-```
-HostKeyAlgorithms=+ssh-dss
-PubkeyAcceptedKeyTypes=+ssh-dss
-```
-
-You can find full instructions at [OpenSSH Legacy Options](https://www.openssh.com/legacy.html) webpage.
-{% endhint %}
-
-### Password Login
-
-Password login is triggered in the following cases:
-
-* When a user logs in over NETCONF or the CLI using the built-in SSH server, with a password. The user presents a username and a password in accordance with the SSH protocol.
-* When a user logs in using the Web UI. The Web UI asks for a username and password.
-* When the method `Maapi.authenticate()` is used.
-
-In this case, NSO will by default try local authentication, PAM, external authentication, and package authentication, in that order, as described below. It is possible to change the order in which these are tried by modifying the `ncs.conf` parameter `/ncs-config/aaa/auth-order`. See [ncs.conf(5)](../../resources/man/ncs.conf.5.md) in Manual Pages for details.
-
-1. If `/aaa/authentication/users/user{$USER}` exists and the presented password matches the encrypted password in `/aaa/authentication/users/user{$USER}/password`, the user is authenticated.
-2. If the password does not match or if the user does not exist in `/aaa/authentication/users`, PAM login is attempted, if enabled. See [PAM](aaa-infrastructure.md#ug.aaa.pam) for details.
-3. If all of the above fails and external authentication is enabled, the configured executable is invoked. See [External Authentication](aaa-infrastructure.md#ug.aaa.external_authentication) for details.
-
-If authentication succeeds, the user's group membership is established as described in [Group Membership](aaa-infrastructure.md#ug.aaa.groups).
-
-### PAM
-
-On operating systems supporting PAM, NSO also supports PAM authentication. Using PAM, authentication with NSO can be very convenient, since it allows the same set of users and groups that have access to the UNIX/Linux host itself to also have access to NSO.
-
-{% hint style="info" %}
-PAM is the recommended way to authenticate NSO users.
-{% endhint %}
-
-If we use PAM, we do not have to have any users or any groups configured in the NSO aaa namespace at all.
-
-To configure PAM we typically need to do the following:
-
-1. Remove all users and groups from the AAA initialization XML file.
-2. Enable PAM in `ncs.conf` by adding the following to the AAA section in `ncs.conf`. The `service` name specifies the PAM service, typically a file in the directory `/etc/pam.d`, but it may alternatively be an entry in a file `/etc/pam.conf`, depending on OS and version. Thus, it is possible to have a different login procedure for NSO than for the host itself.
-
-   ```xml
-   <pam>
-     <enabled>true</enabled>
-     <service>common-auth</service>
-   </pam>
-   ```
-3. If PAM is enabled and we want to use PAM for login, the system may have to run as `root`. This depends on how PAM is configured locally. However, the default system authentication will typically require `root`, since the PAM libraries then read `/etc/shadow`. If we don't want to run NSO as root, the solution here is to change the owner of a helper program called `$NCS_DIR/lib/ncs/lib/pam-*/priv/epam` and also set the `setuid` bit.
-
- ```bash
- # cd $NCS_DIR/lib/ncs/lib/pam-*/priv/
- # chown root:root epam
- # chmod u+s epam
- ```
-
-As an example, say that we have a user test in `/etc/passwd`, and furthermore:
-
-```bash
-# grep test /etc/group
-operator:x:37:test
-admin:x:1001:test
-```
-
-Thus, the `test` user is part of the `admin` and the `operator` groups and logging in to NSO as the `test` user through CLI SSH, Web UI, or NETCONF, renders the following in the audit log.
-
-```
- 28-Jan-2009::16:05:55.663 buzz ncs[14658]: audit user: test/0 logged
- in over ssh from 127.0.0.1 with authmeth:password
- 28-Jan-2009::16:05:55.670 buzz ncs[14658]: audit user: test/5 assigned
- to groups: operator,admin
- 28-Jan-2009::16:05:57.655 buzz ncs[14658]: audit user: test/5 CLI 'exit'
-```
-
-Thus, the `test` user was found and authenticated from `/etc/passwd`, and the crucial group assignment of the test user was done from `/etc/group`.
-
-If we wish to also be able to manipulate the users, their passwords, etc. on the device, we can write a private YANG model for that data, store the data in CDB, and set up a normal CDB subscriber for it. When our private user data is manipulated, the CDB subscriber picks up the changes and updates the contents of the relevant `/etc` files.
-
-### External Authentication
-
-A common situation is when we wish to have all authentication data stored remotely, not locally, for example on a remote RADIUS or LDAP server. This remote authentication server typically not only stores the users and their passwords but also the group information.
-
-If we wish to have not only the users but also the group information stored on a remote server, the best option for NSO authentication is to use external authentication.
-
-If this feature is configured, NSO will invoke the executable configured in `/ncs-config/aaa/external-authentication/executable` in `ncs.conf`, and pass the username and the clear text password on `stdin` using the string notation: `"[user;password;]\n"`.
-
-For example, if the user `bob` attempts to log in over SSH using the password 'secret', and external authentication is enabled, NSO will invoke the configured executable and write `"[bob;secret;]\n"` on the `stdin` stream for the executable. The task of the executable is then to authenticate the user and also establish the username-to-groups mapping.
-
-For example, the executable could be a RADIUS client which utilizes some proprietary vendor attributes to retrieve the groups of the user from the RADIUS server. If authentication is successful, the program should write `accept` followed by a space-separated list of groups that the user is a member of, and additional information as described below. Again, assuming that bob's password indeed was 'secret', and that bob is a member of the `admin` and the `lamers` groups, the program should write `accept admin lamers $uid $gid $supplementary_gids $HOME` on its standard output and then exit.
-
-{% hint style="info" %}
-There is a general limit of 16000 bytes of output from the `externalauth` program.
-{% endhint %}
-
-Thus, the format of the output from an `externalauth` program when authentication is successful should be:
-
-**`"accept $groups $uid $gid $supplementary_gids $HOME\n"`**
-
-Where:
-
-* `$groups` is a space-separated list of the group names the user is a member of.
-* `$uid` is the UNIX integer user ID that NSO should use as a default when executing commands for this user.
-* `$gid` is the UNIX integer group ID that NSO should use as a default when executing commands for this user.
-* `$supplementary_gids` is a (possibly empty) space-separated list of additional UNIX group IDs the user is also a member of.
-* `$HOME` is the directory that should be used as HOME for this user when NSO executes commands on behalf of this user.
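-
-The following is a minimal sketch of such an executable, written in shell for illustration only; a real program would typically query RADIUS or LDAP instead of hardcoding a user, and the user data below is made up:
-
-```bash
-#!/bin/bash
-# NSO writes "[user;password;]\n" on stdin.
-read -r line
-line=${line#\[}                              # strip the leading bracket
-line=${line%\]}                              # strip the trailing bracket
-IFS=';' read -r user password _ <<< "$line"
-if [[ $user == bob && $password == secret ]]; then
-    # "accept $groups $uid $gid $supplementary_gids $HOME"
-    echo "accept admin lamers 1000 1000 100 /home/bob"
-else
-    echo "reject Bad password"
-fi
-```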
-
-It is further possible for the program to return a token on successful authentication, by using `"accept_token"` instead of `"accept"`:
-
-**`"accept_token $groups $uid $gid $supplementary_gids $HOME $token\n"`**
-
-Where:
-
-* `$token` is an arbitrary string. NSO will then, for some northbound interfaces, include this token in responses.
-
-It is also possible for the program to return additional information on successful authentication, by using `"accept_info"` instead of `"accept"`:
-
-**`"accept_info $groups $uid $gid $supplementary_gids $HOME $info\n"`**
-
-Where:
-
-* `$info` is some arbitrary text. NSO will then just append this text to the generated audit log message (CONFD\_EXT\_LOGIN).
-
-Yet another possibility is for the program to return a warning that the user's password is about to expire, by using `"accept_warning"` instead of `"accept"`:
-
-**`"accept_warning $groups $uid $gid $supplementary_gids $HOME $warning\n"`**
-
-Where:
-
-* `$warning` is an appropriate warning message. The message will be processed by NSO according to the setting of `/ncs-config/aaa/expiration-warning` in `ncs.conf`.
-
-There is also support for token variations of `"accept_info"` and `"accept_warning"` namely `"accept_token_info"` and `"accept_token_warning"`. Both `"accept_token_info"` and `"accept_token_warning"` expect the external program to output exactly the same as described above with the addition of a token after `$HOME`:
-
-* `"accept_token_info $groups $uid $gid $supplementary_gids $HOME $token $info\n"`
-* `"accept_token_warning $groups $uid $gid $supplementary_gids $HOME $token $warning\n"`
-
-If authentication failed, the program should write `"reject"` or `"abort"`, possibly followed by a reason for the rejection, and a trailing newline. For example, `"reject Bad password\n"` or just `"abort\n"`. The difference between `"reject"` and `"abort"` is that with `"reject"`, NSO will try subsequent mechanisms configured for `/ncs-config/aaa/auth-order` in `ncs.conf` (if any), while with `"abort"`, the authentication fails immediately. Thus `"abort"` can prevent subsequent mechanisms from being tried, but when external authentication is the last mechanism (as in the default order), it has the same effect as `"reject"`.
-
-Supported by some northbound APIs, such as JSON-RPC and CLI over SSH, the external authentication may also choose to issue a challenge:
-
-`"challenge $challenge-id $challenge-prompt\n"`
-
-{% hint style="info" %}
-The challenge prompt may be multi-line, which is why it must be base64 encoded.
-{% endhint %}
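-
-For instance, instead of an accept or reject verdict, the executable could trigger a second authentication factor by printing a challenge line (a sketch; the challenge ID and prompt text are illustrative):
-
-```bash
-prompt=$(printf 'Enter the one-time code from your authenticator app' | base64)
-echo "challenge otp-1 $prompt"
-```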
-
-For more information on multi-factor authentication, see [External Multi-Factor Authentication](aaa-infrastructure.md#ug.aaa.external_challenge).
-
-When external authentication is used, the group list returned by the external program is prepended by any possible group information stored locally under the `/aaa` tree. Hence when we use external authentication it is indeed possible to have the entire `/aaa/authentication` tree empty. The group assignment performed by the external program will still be valid and the relevant groups will be used by NSO when the authorization rules are checked.
-
-### External Token Validation
-
-When username and password authentication is not feasible, authentication by token validation is possible. Currently, only RESTCONF supports this mode of authentication. It shares all properties of external authentication, but instead of a username and password, it takes a token as input. The output is also almost the same; the only difference is that the program is also expected to output a username.
-
-If this feature is configured, NSO will invoke the executable configured in `/ncs-config/aaa/external-validation/executable` in `ncs.conf`, and pass the token on `stdin` using the string notation: `"[token;]\n"`.
-
-For example, if the user `bob` attempts to log in over RESTCONF using the token `topsecret`, and external validation is enabled, NSO will invoke the configured executable and write `"[topsecret;]\n"` on the `stdin` stream for the executable.
-
-The task of the executable is then to validate the token, thereby authenticating the user and also establishing the username and username-to-groups mapping.
-
-For example, the executable could be a FUSION client that utilizes some proprietary vendor attributes to retrieve the username and groups of the user from the FUSION server. If token validation is successful, the program should write `accept` followed by a space-separated list of groups that the user is a member of, and additional information as described below. Again, assuming that `bob`'s token indeed was `topsecret`, and that `bob` is a member of the `admin` and the `lamers` groups, the program should write `accept admin lamers $uid $gid $supplementary_gids $HOME $USER` on its standard output and then exit.
-
-{% hint style="info" %}
-There is a general limit of 16000 bytes of output from the `externalvalidation` program.
-{% endhint %}
-
-Thus the format of the output from an `externalvalidation` program when token validation authentication is successful should be:
-
-`"accept $groups $uid $gid $supplementary_gids $HOME $USER\n"`
-
-Where:
-
-* `$groups` is a space-separated list of the group names the user is a member of.
-* `$uid` is the UNIX integer user ID NSO should use as a default when executing commands for this user.
-* `$gid` is the UNIX integer group ID NSO should use as a default when executing commands for this user.
-* `$supplementary_gids` is a (possibly empty) space-separated list of additional UNIX group IDs the user is also a member of.
-* `$HOME` is the directory that should be used as HOME for this user when NSO executes commands on behalf of this user.
-* `$USER` is the user derived from mapping the token.
-
-It is further possible for the program to return a new token on successful token validation authentication, by using `"accept_token"` instead of `"accept"`:
-
-`"accept_token $groups $uid $gid $supplementary_gids $HOME $USER $token\n"`
-
-Where:
-
-* `$token` is an arbitrary string. NSO will then, for some northbound interfaces, include this token in responses.
-
-It is also possible for the program to return additional information on successful token validation authentication, by using `"accept_info"` instead of `"accept"`:
-
-`"accept_info $groups $uid $gid $supplementary_gids $HOME $USER $info\n"`
-
-Where:
-
-* `$info` is some arbitrary text. NSO will then just append this text to the generated audit log message (CONFD\_EXT\_LOGIN).
-
-Yet another possibility is for the program to return a warning that the user's password is about to expire, by using `"accept_warning"` instead of `"accept"`:
-
-`"accept_warning $groups $uid $gid $supplementary_gids $HOME $USER $warning\n"`
-
-Where:
-
-* `$warning` is an appropriate warning message. The message will be processed by NSO according to the setting of `/ncs-config/aaa/expiration-warning` in `ncs.conf`.
-
-There is also support for token variations of `"accept_info"` and `"accept_warning"` namely `"accept_token_info"` and `"accept_token_warning"`. Both `"accept_token_info"` and `"accept_token_warning"` expect the external program to output exactly the same as described above with the addition of a token after `$USER`:
-
-`"accept_token_info $groups $uid $gid $supplementary_gids $HOME $USER $token $info\n"`
-
-`"accept_token_warning $groups $uid $gid $supplementary_gids $HOME $USER $token $warning\n"`
-
-If token validation authentication fails, the program should write `"reject"` or `"abort"`, possibly followed by a reason for the rejection and a trailing newline. For example `"reject Bad password\n"` or just `"abort\n"`. The difference between `"reject"` and `"abort"` is that with `"reject"`, NSO will try subsequent mechanisms configured for `/ncs-config/aaa/validation-order` in `ncs.conf` (if any), while with `"abort"`, the token validation authentication fails immediately. Thus `"abort"` can prevent subsequent mechanisms from being tried. Currently, the only available token validation authentication mechanism is the external one.
-
-Supported by some northbound APIs, such as JSON-RPC and CLI over SSH, the external validation may also choose to issue a challenge:
-
-`"challenge $challenge-id $challenge-prompt\n"`
-
-{% hint style="info" %}
-The challenge prompt may be multi-line, which is why it must be base64 encoded.
-{% endhint %}
-
-For more information on multi-factor authentication, see [External Multi-Factor Authentication](aaa-infrastructure.md#ug.aaa.external_challenge).
-
-### External Multi-Factor Authentication
-
-When username, password, or token authentication is not enough, a challenge may be sent from any of the external authentication mechanisms to the user. A challenge consists of a challenge ID and a base64-encoded challenge prompt, and the user is supposed to send a response to the challenge. Currently, only JSON-RPC and CLI over SSH support multi-factor authentication. Responses to challenges of multi-factor authentication have the same output as the token authentication mechanism.
-
-If this feature is configured, NSO will invoke the executable configured in `/ncs-config/aaa/external-challenge/executable` in `ncs.conf`, and pass the challenge ID and response on `stdin` using the string notation: `"[challenge-id;response;]\n"`.
-
-For example, suppose a user `bob` has received a challenge from external authentication, external validation, or external challenge, and then attempts to log in over JSON-RPC with a response to the challenge, using challenge ID `22efa` and response `ae457b`. With the external challenge mechanism enabled, NSO will invoke the configured executable and write `"[22efa;ae457b;]\n"` on the `stdin` stream for the executable.
-
-The task of the executable is then to validate the challenge ID and response combination, thereby authenticating the user and also establishing the username and the username-to-groups mapping.
-
-For example, the executable could be a RADIUS client which utilizes some proprietary vendor attributes to retrieve the username and groups of the user from the RADIUS server. If validation of the challenge ID and response is successful, the program should write `accept` followed by a space-separated list of groups the user is a member of, and additional information as described below. Again, assuming that `bob`'s challenge ID and response were indeed `22efa` and `ae457b`, and that `bob` is a member of the `admin` and the `lamers` groups, the program should write `"accept admin lamers $uid $gid $supplementary_gids $HOME $USER\n"` on its standard output and then exit.
-
-{% hint style="info" %}
-There is a general limit of 16000 bytes of output from the `externalchallenge` program.
-{% endhint %}
-
-Thus the format of the output from an `externalchallenge` program when challenge-based authentication is successful should be:
-
-`"accept $groups $uid $gid $supplementary_gids $HOME $USER\n"`
-
-Where:
-
-* `$groups` is a space-separated list of the group names the user is a member of.
-* `$uid` is the UNIX integer user ID NSO should use as a default when executing commands for this user.
-* `$gid` is the UNIX integer group ID NSO should use as a default when executing commands for this user.
-* `$supplementary_gids` is a (possibly empty) space-separated list of additional UNIX group IDs the user is also a member of.
-* `$HOME` is the directory that should be used as HOME for this user when NSO executes commands on behalf of this user.
-* `$USER` is the user derived from mapping the challenge ID, response.
-
-It is further possible for the program to return a token on successful authentication, by using `"accept_token"` instead of `"accept"`:
-
-`"accept_token $groups $uid $gid $supplementary_gids $HOME $USER $token\n"`
-
-Where:
-
-* `$token` is an arbitrary string. NSO will then, for some northbound interfaces, include this token in responses.
-
-It is also possible for the program to return additional information on successful authentication, by using `"accept_info"` instead of `"accept"`:
-
-`"accept_info $groups $uid $gid $supplementary_gids $HOME $USER $info\n"`
-
-Where:
-
-* `$info` is some arbitrary text. NSO will then just append this text to the generated audit log message (CONFD\_EXT\_LOGIN).
-
-Yet another possibility is for the program to return a warning that the user's password is about to expire, by using `"accept_warning"` instead of `"accept"`:
-
-`"accept_warning $groups $uid $gid $supplementary_gids $HOME $USER $warning\n"`
-
-Where:
-
-* `$warning` is an appropriate warning message. The message will be processed by NSO according to the setting of `/ncs-config/aaa/expiration-warning` in `ncs.conf`.
-
-There is also support for token variations of `"accept_info"` and `"accept_warning"`, namely `"accept_token_info"` and `"accept_token_warning"`. Both `"accept_token_info"` and `"accept_token_warning"` expect the external program to output exactly the same as described above, with the addition of a token after `$USER`:
-
-`"accept_token_info $groups $uid $gid $supplementary_gids $HOME $USER $token $info\n"`
-
-`"accept_token_warning $groups $uid $gid $supplementary_gids $HOME $USER $token $warning\n"`
-
-If authentication fails, the program should write `"reject"` or `"abort"`, possibly followed by a reason for the rejection and a trailing newline. For example `"reject Bad challenge response\n"` or just `"abort\n"`. The difference between `"reject"` and `"abort"` is that with `"reject"`, NSO will try subsequent mechanisms configured for `/ncs-config/aaa/challenge-order` in `ncs.conf` (if any), while with `"abort"`, the challenge-response authentication fails immediately. Thus `"abort"` can prevent subsequent mechanisms from being tried. Currently, the only available challenge-response authentication mechanism is the external one.
-
-Supported by some northbound APIs, such as JSON-RPC and CLI over SSH, the external challenge mechanism may also choose to issue a new challenge:
-
-`"challenge $challenge-id $challenge-prompt\n"`
-
-{% hint style="info" %}
-The challenge prompt may be multi-line, so it must be base64 encoded.
-{% endhint %}
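-
-For example, a script could issue a new challenge along these lines (a sketch; the challenge ID `8f3a1` and the prompt text are made-up values):
-
-```bash
-# The prompt may be multi-line, so it is base64 encoded; tr removes the
-# line wrapping that some base64 implementations add to their output.
-prompt=$(printf 'Approve the login in your app,\nthen enter the code: ' | base64 | tr -d '\n')
-echo "challenge 8f3a1 $prompt"
-```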
-
-{% hint style="info" %}
-Note that when using challenges with the CLI over SSH, `/ncs-config/cli/ssh/use-keyboard-interactive` needs to be set to `true` for the challenges to be sent correctly to the client.
-{% endhint %}
-
-{% hint style="info" %}
-The SSH client used may need to be configured to allow a higher number of password prompts, e.g., via `-o NumberOfPasswordPrompts`; otherwise, the default limit may cause unexpected behavior when the client is presented with multiple challenges.
-{% endhint %}
-
-### Package Authentication
-
-The Package Authentication functionality allows packages to handle NSO authentication in a customized fashion. Authentication data can, for example, be stored remotely, with a script in the package used to communicate with the remote system.
-
-Compared to external authentication, the Package Authentication mechanism allows specifying multiple packages to be invoked in the order they appear in the configuration. NSO provides implementations for LDAP, SAMLv2, and TACACS+ protocols with packages available in `$NCS_DIR/packages/auth/`. Additionally, you can implement your own authentication packages as detailed below.
-
-Authentication packages are NSO packages whose required content is an executable file `scripts/authenticate`. This executable follows essentially the same API and limitations as the external auth script, but with a different input format and some additional functionality. Other than these requirements, it is possible to customize the package arbitrarily.
-
-{% hint style="info" %}
-Package authentication is supported for Single Sign-On (see [Single Sign-On](../../development/advanced-development/web-ui-development/#single-sign-on-sso) in Web UI), JSON-RPC, and RESTCONF. Note that Single Sign-On and (non-batch) JSON-RPC allow all functionality, while the RESTCONF interface will treat anything other than an "`accept_username`" reply from the package as if authentication failed!
-{% endhint %}
-
-Package authentication is enabled by setting the `ncs.conf` option `/ncs-config/aaa/package-authentication/enabled` to `true`, and adding the package by name to the `/ncs-config/aaa/package-authentication/packages` list. The order of the configured packages is the order in which the packages will be used when attempting to authenticate a user. See [ncs.conf(5)](../../resources/man/ncs.conf.5.md) in Manual Pages for details.
-
-If this feature is configured in `ncs.conf`, NSO will, for each configured package, invoke `scripts/authenticate` and pass the username, password, original HTTP request (i.e., the user-supplied `next` query parameter), HTTP request, HTTP headers, HTTP body, client source IP, client source port, northbound API context, and protocol on `stdin` using the string notation: `"[user;password;orig_request;request;headers;body;src-ip;src-port;ctx;proto;]\n"`.
-
-{% hint style="info" %}
-The fields user, password, orig\_request, request, headers, and body are all base64 encoded.
-{% endhint %}
-
-{% hint style="info" %}
-If the body length exceeds the `partial_post_size` of the RESTCONF server, the body passed to the authenticate script will only contain the string `==nso_package_authentication_partial_body==`.
-{% endhint %}
-
-{% hint style="info" %}
-The original request will be prefixed with the string `==nso_package_authentication_next==` before the base64 encoded part. This means supplying the `next` query parameter value `/my-location` will pass the following string to the authentication script: `==nso_package_authentication_next==L215LWxvY2F0aW9u`.
-{% endhint %}
-
-For example, assume package authentication is enabled and configured with the cisco-nso-saml2-auth package, and Single Sign-On is enabled. If an unauthenticated user attempts to start a single sign-on process over the northbound HTTP-based APIs, NSO will, for each configured package, invoke the executable `scripts/authenticate` and write `"[;;;R0VUIC9zc28vc2FtbC9sb2dpbi8gSFRUUC8xLjE=;;;127.0.0.1;59226;webui;https;]\n"` on the `stdin` stream for the executable.
-
-For clarity, these are the base64-decoded contents sent to `stdin`: `"[;;;GET /sso/saml/login/ HTTP/1.1;;;127.0.0.1;59226;webui;https;]\n"`.
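-
-A package's `scripts/authenticate` could start by splitting the input line and decoding the base64 fields, along these lines (a minimal sketch; the actual authentication against the remote system is omitted, and the reply values are examples):
-
-```bash
-#!/bin/sh
-# Split "[user;password;orig_request;request;headers;body;src-ip;src-port;ctx;proto;]"
-# on '[', ';', and ']' into one variable per field.
-IFS='[;]' read -r _ user pass orig req hdrs body ip port ctx proto _
-
-# The first six fields are base64 encoded; decode the ones we need
-# (GNU coreutils `base64 -d` syntax).
-user=$(printf '%s' "$user" | base64 -d)
-pass=$(printf '%s' "$pass" | base64 -d)
-req=$(printf '%s' "$req" | base64 -d)
-
-# ... authenticate against the remote system here (not shown) ...
-
-# On success, reply with the username, groups, and UNIX IDs, as above.
-echo "accept_username bob admin wheel 1000 1000 100 /home/bob"
-```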
-
-The task of the package is then to authenticate the user and also establish the username-to-groups mapping.
-
-For example, the package could support the SAMLv2 authentication protocol, which communicates with an Identity Provider (IdP) for authentication. If authentication is successful, the program should write either `"accept"` or `"accept_username"`, depending on whether the authentication is started with a username, or an external entity handles the entire authentication and supplies the username for a successful authentication. (SAMLv2 uses `accept_username`, since the IdP handles the entire authentication.) The `accept_username` reply is followed by a username, then by a space-separated list of groups the user is a member of, and additional information as described below. If authentication is successful and the authenticated user `bob` is a member of the groups `admin` and `wheel`, the program should write `"accept_username bob admin wheel 1000 1000 100 /home/bob\n"` on its standard output and then exit.
-
-{% hint style="info" %}
-There is a general limit of 16000 bytes of output from the `packageauth` program.
-{% endhint %}
-
-Thus the format of the output from a `packageauth` program when authentication is successful should be either the same as from `externalauth` (see [External Authentication](aaa-infrastructure.md#ug.aaa.external_authentication)) or the following:
-
-`"accept_username $USER $groups $uid $gid $supplementary_gids $HOME\n"`
-
-Where:
-
-* `$USER` is the user derived during the execution of the "packageauth" program.
-* `$groups` is a space-separated list of the group names the user is a member of.
-* `$uid` is the UNIX integer user ID NSO should use as a default when executing commands for this user.
-* `$gid` is the UNIX integer group ID NSO should use as a default when executing commands for this user.
-* `$supplementary_gids` is a (possibly empty) space-separated list of additional UNIX group IDs the user is also a member of.
-* `$HOME` is the directory that should be used as HOME for this user when NSO executes commands on behalf of this user.
-
-In addition to the `externalauth` API, the authentication packages can also return the following responses:
-
-* `unknown '`_`reason`_`'` - (_`reason`_ being plain-text) if they can't handle authentication for the supplied input.
-* `redirect '`_`url`_`'` - (_`url`_ being base64 encoded) for an HTTP redirect.
-* `content '`_`content-type`_`' '`_`content`_`'` - (_`content-type`_ being plain-text mime-type and _`content`_ being base64 encoded) to relay supplied content.
-* `accept_username_redirect url $USER $groups $uid $gid $supplementary_gids $HOME` - which combines the `accept_username` and `redirect`.
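-
-For instance, a package that needs to send the user to an external login page could emit a `redirect` reply like this (a sketch; the URL is an example value):
-
-```bash
-# The URL must be base64 encoded; tr strips any line wrapping.
-url=$(printf 'https://idp.example.com/login' | base64 | tr -d '\n')
-echo "redirect '$url'"
-```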
-
-It is also possible for the program to return additional information on successful authentication, by using `"accept_info"` instead of `"accept"`:
-
-`"accept_info $groups $uid $gid $supplementary_gids $HOME $info\n"`
-
-Where:
-
-* `$info` is some arbitrary text. NSO will then just append this text to the generated audit log message (NCS\_PACKAGE\_AUTH\_SUCCESS).
-
-Yet another possibility is for the program to return a warning that the user's password is about to expire, by using `"accept_warning"` instead of `"accept"`:
-
-`"accept_warning $groups $uid $gid $supplementary_gids $HOME $warning\n"`
-
-Where:
-
-* `$warning` is an appropriate warning message. The message will be processed by NSO according to the setting of `/ncs-config/aaa/expiration-warning` in `ncs.conf`.
-
-If authentication fails, the program should write `"reject"` or `"abort"`, possibly followed by a reason for the rejection and a trailing newline. For example `"reject 'Bad password'\n"` or just `"abort\n"`. The difference between `"reject"` and `"abort"` is that with `"reject"`, NSO will try subsequent mechanisms configured for `/ncs-config/aaa/auth-order`, and packages configured for `/ncs-config/aaa/package-authentication/packages` in `ncs.conf` (if any), while with `"abort"`, the authentication fails immediately. Thus `"abort"` can prevent subsequent mechanisms from being tried, but when external authentication is the last mechanism (as in the default order), it has the same effect as `"reject"`.
-
-When package authentication is used, the group list returned by the package executable is prepended with any group information stored locally under the `/aaa` tree. It is hence possible to have the entire `/aaa/authentication` tree empty: the group assignment performed by the package will still be valid, and the relevant groups will be used by NSO when the authorization rules are checked.
-
-### **Username/Password Package Authentication for CLI**
-
-Package authentication will invoke the `scripts/authenticate` script when a user tries to authenticate using the CLI. In this case, only the username, password, client source IP, client source port, northbound API context, and protocol will be passed to the script.
-
-{% hint style="info" %}
-When serving a username/password request, script output other than `accept`, `challenge`, or `abort` will be treated as if authentication failed.
-{% endhint %}
-
-### **Package Challenges**
-
-When package challenges are enabled, i.e., `/ncs-config/aaa/package-authentication/package-challenge/enabled` is set to `true`, packages will also be used to try to resolve challenges sent to the server; this is only supported for CLI over SSH. The script `scripts/challenge` will be invoked, passing the challenge ID, response, client source IP, client source port, northbound API context, and protocol on `stdin` using the string notation: `"[challengeid;response;src-ip;src-port;ctx;proto;]\n"`. The output should follow that of the authenticate script.
-
-{% hint style="info" %}
-The fields `challengeid` and `response` are base64 encoded when passed to the script.
-{% endhint %}
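-
-A minimal `scripts/challenge` could follow the same pattern as the authenticate script (a sketch; `resolve_challenge` is a hypothetical placeholder for the actual validation logic, and the reply values are examples):
-
-```bash
-#!/bin/sh
-# Split "[challengeid;response;src-ip;src-port;ctx;proto;]" into fields
-# and decode the two base64-encoded ones.
-IFS='[;]' read -r _ id resp ip port ctx proto _
-id=$(printf '%s' "$id" | base64 -d)
-resp=$(printf '%s' "$resp" | base64 -d)
-
-if resolve_challenge "$id" "$resp"; then   # hypothetical helper
-    echo "accept_username bob admin wheel 1000 1000 100 /home/bob"
-else
-    echo "abort Bad challenge response"
-fi
-```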
-
-## Authenticating IPC Access
-
-NSO communicates with clients (Python and Java client libraries, `ncs_cli`, `netconf-subsys`, and others) using the NSO IPC socket. The protocol used allows the client to provide user and group information to use for authorization in NSO, effectively delegating authentication to the client.
-
-By default, only local connections to the IPC socket are allowed. If all local clients are considered trusted, the socket can provide unauthenticated access, with the client-supplied user name. This is what the `--user` option of `ncs_cli` does. For example, the following connects to NSO as user `admin`.
-
-```bash
-ncs_cli --user admin
-```
-
-The same is possible for groups. This unauthenticated access is currently the default.
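-
-For example, to supply both the user and the groups (using the `-g` option also shown later in this document):
-
-```bash
-ncs_cli -u admin -g admin
-```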
-
-The main condition here is that all clients connecting to the socket are trusted to use the correct user and group information. That is often not the case, for example, when untrusted users have shell access to the host and can run `ncs_cli` or otherwise initiate local connections to the IPC socket. In such cases, access to the socket must be restricted.
-
-In general, authenticating access to the IPC socket is a security best practice and should always be used. When NSO is configured to use Unix domain sockets for IPC, it authenticates the client based on the UID of the other end of the socket connection. Alternatively, the system can be instructed to use TCP sockets. In this case, the system should be configured to use an access check, where every IPC client must prove that it has access to a pre-shared key. See [Restricting Access to the IPC Socket](../advanced-topics/ipc-connection.md#restricting-access-to-the-ipc-socket) on how to enable it.
-
-### UID-based Authentication for Unix Sockets
-
-NSO will use Unix domain sockets for IPC communication when the `ncs-local-ipc/enabled` option in `ncs.conf` is set to `true`. The main benefit of this communication method is that it is generally more secure than TCP sockets. It also provides additional information on the communicating peer, such as the user ID of the calling process. NSO can then use this information to authenticate the peer.
-
-As part of the initial handshake, NSO reads the effective UID (euid) of the process initiating the Unix socket connection. The system then finds an `/aaa/authentication/users/user` entry with the corresponding `uid` value. Access is permitted or denied based on the `local_ipc_access` value. If access is permitted, the user connects as the user found in the `/aaa/authentication/users/user` list. The following is an example of such a user list entry:
-
-```bash
-aaa authentication users user admin
- uid 500
- gid 500
- password $6$...
- ssh_keydir /var/ncs/homes/admin/.ssh
- homedir /var/ncs/homes/admin
- local_ipc_access true
-!
-```
-
-NSO will skip this access check if the euid of the connecting process is 0 (the root user) or the same as the user NSO is running as. (In both of these cases, the connecting user could access NSO data directly, bypassing the access check.)
-
-When using Unix socket IPC, clients and client libraries must specify the path that identifies the socket. The path must match the one set under `ncs-local-ipc/path` in `ncs.conf`. Clients may expose a client-specific way to set it, such as the `-S` option of the `ncs_cli` command. Alternatively, you can use the `NCS_IPC_PATH` environment variable to specify the socket path independently of the used client.
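-
-For example, assuming the socket path configured in `ncs.conf` is `/tmp/nso/ipc` (an example value):
-
-```bash
-# Point the client libraries and ncs_cli at the Unix IPC socket.
-export NCS_IPC_PATH=/tmp/nso/ipc
-ncs_cli -u admin
-```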
-
-See [examples.ncs/aaa/ipc](https://github.com/NSO-developer/nso-examples/tree/6.6/aaa/ipc) for a working example.
-
-## Group Membership
-
-Once a user is authenticated, group membership must be established. A single user can be a member of several groups. Group membership is used by the authorization rules to decide which operations a certain user is allowed to perform. Thus, the NSO AAA authorization model is entirely group-based. This is also sometimes referred to as role-based authorization.
-
-All groups are stored under `/nacm/groups`, and each group contains a number of usernames. The `ietf-netconf-acm.yang` model defines a group entry:
-
-```yang
-list group {
- key name;
-
- description
- "One NACM Group Entry. This list will only contain
- configured entries, not any entries learned from
- any transport protocols.";
-
- leaf name {
- type group-name-type;
- description
- "Group name associated with this entry.";
- }
-
- leaf-list user-name {
- type user-name-type;
- description
- "Each entry identifies the username of
- a member of the group associated with
- this entry.";
- }
-}
-```
-
-The `tailf-acm.yang` model augments this with a `gid` leaf:
-
-```yang
-augment /nacm:nacm/nacm:groups/nacm:group {
- leaf gid {
- type int32;
- description
- "This leaf associates a numerical group ID with the group.
-       When an OS command is executed on behalf of a user,
-       supplementary group IDs are assigned based on 'gid' values
-       for the groups that the user is a member of.";
- }
-}
-```
-
-A valid group entry could thus look like:
-
-```xml
-<group>
-  <name>admin</name>
-  <user-name>bob</user-name>
-  <user-name>joe</user-name>
-  <gid xmlns="http://tail-f.com/yang/acm">99</gid>
-</group>
-```
-
-The above XML data would then mean that users `bob` and `joe` are members of the `admin` group. The users need not necessarily exist as actual users under `/aaa/authentication/users` in order to belong to a group. If, for example, PAM authentication is used, it does not make sense to have all users listed under `/aaa/authentication/users`.
-
-By default, the user is assigned to groups by using any groups provided by the northbound transport (e.g. via the `ncs_cli` or `netconf-subsys` programs), by consulting data under `/nacm/groups`, by consulting the `/etc/group` file, and by using any additional groups supplied by the authentication method. If `/nacm/enable-external-groups` is set to "false", only the data under `/nacm/groups` is consulted.
-
-The resulting group assignment is the union of these methods, if it is non-empty. Otherwise, the default group is used, if configured (`/ncs-config/aaa/default-group` in `ncs.conf`).
-
-A user entry has a UNIX uid and UNIX gid assigned to it. Groups may have optional group IDs. When a user is logged in, and NSO tries to execute commands on behalf of that user, the uid/gid for the command execution is taken from the user entry. Furthermore, UNIX supplementary group IDs are assigned according to the `gid`'s in the groups where the user is a member.
-
-## Authorization
-
-Once a user is authenticated and group membership is established, when the user starts to perform various actions, each action must be authorized. Normally the authorization is done based on rules configured in the AAA data model as described in this section.
-
-The authorization procedure first checks the value of `/nacm/enable-nacm`. This leaf has a default of `true`, but if it is set to `false`, all access is permitted. Otherwise, the next step is to traverse the `rule-list` list:
-
-```yang
-list rule-list {
- key "name";
- ordered-by user;
- description
- "An ordered collection of access control rules.";
-
- leaf name {
- type string {
- length "1..max";
- }
- description
- "Arbitrary name assigned to the rule-list.";
- }
- leaf-list group {
- type union {
- type matchall-string-type;
- type group-name-type;
- }
- description
- "List of administrative groups that will be
- assigned the associated access rights
- defined by the 'rule' list.
-
- The string '*' indicates that all groups apply to the
- entry.";
- }
-
- // ...
-}
-```
-
-If the `group` leaf-list in a `rule-list` entry matches any of the user's groups, the `cmdrule` list entries are examined for command authorization, while the `rule` entries are examined for RPC, notification, and data authorization.
-
-### Command Authorization
-
-The `tailf-acm.yang` module augments the `rule-list` entry in `ietf-netconf-acm.yang` with a `cmdrule` list:
-
-```yang
-augment /nacm:nacm/nacm:rule-list {
-
- list cmdrule {
- key "name";
- ordered-by user;
- description
- "One command access control rule. Command rules control access
- to CLI commands and Web UI functions.
-
- Rules are processed in user-defined order until a match is
- found. A rule matches if 'context', 'command', and
- 'access-operations' match the request. If a rule
- matches, the 'action' leaf determines if access is granted
- or not.";
-
- leaf name {
- type string {
- length "1..max";
- }
- description
- "Arbitrary name assigned to the rule.";
- }
-
- leaf context {
- type union {
- type nacm:matchall-string-type;
- type string;
- }
- default "*";
- description
- "This leaf matches if it has the value '*' or if its value
- identifies the agent that is requesting access, i.e. 'cli'
- for CLI or 'webui' for Web UI.";
- }
-
- leaf command {
- type string;
- default "*";
- description
- "Space-separated tokens representing the command. Refer
- to the Tail-f AAA documentation for further details.";
- }
-
- leaf access-operations {
- type union {
- type nacm:matchall-string-type;
- type nacm:access-operations-type;
- }
- default "*";
- description
- "Access operations associated with this rule.
-
- This leaf matches if it has the value '*' or if the
- bit corresponding to the requested operation is set.";
- }
-
- leaf action {
- type nacm:action-type;
- mandatory true;
- description
- "The access control action associated with the
- rule. If a rule is determined to match a
- particular request, then this object is used
- to determine whether to permit or deny the
- request.";
- }
-
- leaf log-if-permit {
- type empty;
- description
- "If this leaf is present, access granted due to this rule
- is logged in the developer log. Otherwise, only denied
- access is logged. Mainly intended for debugging of rules.";
- }
-
- leaf comment {
- type string;
- description
- "A textual description of the access rule.";
- }
- }
-}
-```
-
-Each rule has seven leafs. The first is the `name` list key; the following three are matching leafs. When NSO tries to run a command, it matches the command against the matching leafs, and if all of `context`, `command`, and `access-operations` match, the fifth field, i.e., the `action`, is applied.
-
-* `name`: `name` is the name of the rule. The rules are checked in order, with the ordering given by the YANG `ordered-by user` semantics, i.e. independent of the key values.
-* `context`: `context` is either of the strings `cli`, `webui`, or `*` for a command rule. This means that we can differentiate authorization rules for which access method is used. Thus if command access is attempted through the CLI, the context will be the string `cli` whereas for operations via the Web UI, the context will be the string `webui`.
-* `command`: This is the actual command getting executed. If the rule applies to one or several CLI commands, the string is a space-separated list of CLI command tokens, for example `request system reboot`. If the command applies to Web UI operations, it is a space-separated string similar to a CLI string. A string that consists of just `*` matches any command.\
- \
- In general, we do not recommend using command rules to protect the configuration. Use rules for data access as described in the next section to control access to different parts of the data. Command rules should be used only for CLI commands and Web UI operations that cannot be expressed as data rules.\
- \
- The individual tokens can be POSIX extended regular expressions. Each regular expression is implicitly anchored, i.e. an `^` is prepended and a `$` is appended to the regular expression.
-* `access-operations`: `access-operations` is used to match the operation that NSO tries to perform. It must be one or both of the "read" and "exec" values from the `access-operations-type` bits type definition in `ietf-netconf-acm.yang`, or "\*" to match any operation.
-* `action`: If all of the previous fields match, the rule as a whole matches and the value of `action` is applied. That is, if a match is found, a decision is made whether to permit or deny the request in its entirety. If `action` is `permit`, the request is permitted; if `action` is `deny`, the request is denied and an entry is written to the developer log.
-* `log-if-permit`: If this leaf is present, an entry is written to the developer log for a matching request also when `action` is `permit`. This is very useful when debugging command rules.
-* `comment`: An optional textual description of the rule.
-
-For the rule processing to be written to the devel log, the `/ncs-config/logs/developer-log-level` entry in `ncs.conf` must be set to `trace`.
-
-If no matching rule is found in any of the `cmdrule` lists in any `rule-list` entry that matches the user's groups, this augmentation from `tailf-acm.yang` is relevant:
-
-```yang
-augment /nacm:nacm {
- leaf cmd-read-default {
- type nacm:action-type;
- default "permit";
- description
- "Controls whether command read access is granted
- if no appropriate cmdrule is found for a
- particular command read request.";
- }
-
- leaf cmd-exec-default {
- type nacm:action-type;
- default "permit";
- description
- "Controls whether command exec access is granted
- if no appropriate cmdrule is found for a
- particular command exec request.";
- }
-
- leaf log-if-default-permit {
- type empty;
- description
- "If this leaf is present, access granted due to one of
-       /nacm/read-default, /nacm/write-default, /nacm/exec-default,
-       /nacm/cmd-read-default, or /nacm/cmd-exec-default
- being set to 'permit' is logged in the developer log.
- Otherwise, only denied access is logged. Mainly intended
- for debugging of rules.";
- }
-}
-```
-
-* If `read` access is requested, the value of `/nacm/cmd-read-default` determines whether access is permitted or denied.
-* If `exec` access is requested, the value of `/nacm/cmd-exec-default` determines whether access is permitted or denied.
-
-If access is permitted due to one of these default leafs, `/nacm/log-if-default-permit` has the same effect as the `log-if-permit` leaf for the `cmdrule` lists.
-
-### RPC, Notification, and Data Authorization
-
-The rules in the `rule` list are used to control access to rpc operations, notifications, and data nodes defined in YANG models. Access to invocation of actions (`tailf:action`) is controlled with the same method as access to data nodes, with a request for `exec` access. `ietf-netconf-acm.yang` defines a `rule` entry as:
-
-```yang
-list rule {
- key "name";
- ordered-by user;
- description
- "One access control rule.
-
- Rules are processed in user-defined order until a match is
- found. A rule matches if 'module-name', 'rule-type', and
- 'access-operations' match the request. If a rule
- matches, the 'action' leaf determines if access is granted
- or not.";
-
- leaf name {
- type string {
- length "1..max";
- }
- description
- "Arbitrary name assigned to the rule.";
- }
-
- leaf module-name {
- type union {
- type matchall-string-type;
- type string;
- }
- default "*";
- description
- "Name of the module associated with this rule.
-
- This leaf matches if it has the value '*' or if the
- object being accessed is defined in the module with the
- specified module name.";
- }
- choice rule-type {
- description
- "This choice matches if all leafs present in the rule
- match the request. If no leafs are present, the
- choice matches all requests.";
- case protocol-operation {
- leaf rpc-name {
- type union {
- type matchall-string-type;
- type string;
- }
- description
- "This leaf matches if it has the value '*' or if
- its value equals the requested protocol operation
- name.";
- }
- }
- case notification {
- leaf notification-name {
- type union {
- type matchall-string-type;
- type string;
- }
- description
- "This leaf matches if it has the value '*' or if its
- value equals the requested notification name.";
- }
- }
- case data-node {
- leaf path {
- type node-instance-identifier;
- mandatory true;
- description
- "Data Node Instance Identifier associated with the
- data node controlled by this rule.
-
- Configuration data or state data instance
- identifiers start with a top-level data node. A
- complete instance identifier is required for this
- type of path value.
-
- The special value '/' refers to all possible
- data-store contents.";
- }
- }
- }
-
- leaf access-operations {
- type union {
- type matchall-string-type;
- type access-operations-type;
- }
- default "*";
- description
- "Access operations associated with this rule.
-
- This leaf matches if it has the value '*' or if the
- bit corresponding to the requested operation is set.";
- }
-
- leaf action {
- type action-type;
- mandatory true;
- description
- "The access control action associated with the
- rule. If a rule is determined to match a
- particular request, then this object is used
- to determine whether to permit or deny the
- request.";
- }
-
- leaf comment {
- type string;
- description
- "A textual description of the access rule.";
- }
-}
-```
-
-`tailf-acm` augments this with two additional leafs:
-
-```yang
-augment /nacm:nacm/nacm:rule-list/nacm:rule {
-
- leaf context {
- type union {
- type nacm:matchall-string-type;
- type string;
- }
- default "*";
- description
- "This leaf matches if it has the value '*' or if its value
- identifies the agent that is requesting access, e.g. 'netconf'
- for NETCONF, 'cli' for CLI, or 'webui' for Web UI.";
-
- }
-
- leaf log-if-permit {
- type empty;
- description
- "If this leaf is present, access granted due to this rule
- is logged in the developer log. Otherwise, only denied
- access is logged. Mainly intended for debugging of rules.";
- }
-}
-```
-
-Similar to the command access check, whenever a user through some agent tries to access an RPC, a notification, a data item, or an action, access is checked. For a rule to match, three or four leafs must match and when a match is found, the corresponding action is taken.
-
-We have the following leafs in the `rule` list entry.
-
-* `name`: The name of the rule. The rules are checked in order, with the ordering given by the YANG `ordered-by user` semantics, i.e., independent of the key values.
-* `module-name`: The `module-name` string is the name of the YANG module where the node being accessed is defined. The special value `*` (i.e., the default) matches all modules.\
-  **Note**: Since the elements of the path to a given node may be defined in different YANG modules when augmentation is used, rules that have a value other than `*` for the `module-name` leaf may require additional processing before a decision to permit or deny the access can be taken. Thus, if an XPath that completely identifies the nodes that the rule should apply to is given for the `path` leaf (see below), it may be best to leave the `module-name` leaf unset.
-* `rpc-name / notification-name / path`: This is a choice between three possible leafs that are used for matching, in addition to the `module-name`:
-* `rpc-name`: The name of an RPC operation, or `*` to match any RPC.
-* `notification-name`: The name of a notification, or `*` to match any notification.
-* `path`: A restricted XPath expression leading down into the populated XML tree. A rule with a path specified matches if it is equal to or shorter than the checked path. Several types of paths are allowed.
-
- 1. Tagpaths that do not contain any keys, for example, `/ncs/live-device/live-status`.
- 2. Instantiated keys, as in `/devices/device[name="x1"]/config/interface`, which matches the interface configuration for the managed device `x1`. It is possible to have partially instantiated paths containing only some keys, i.e., combinations of tagpaths and keypaths. Assuming a deeper tree, the path `/devices/device/config/interface[name="eth0"]` matches the `eth0` interface configuration on all managed devices.
- 3. A wildcard at the end, as in `/services/web-site/*`, which does not match the web-site service instances themselves, but rather all children of the web-site service instances.
- 4. Leading/trailing whitespace, as in `" /devices/device/config "`, which is ignored.
-
-   Thus, the path in a rule is matched against the path in the attempted data access. If the attempted access has a path that is equal to or longer than the rule path, we have a match.\
- \
- If none of the leafs `rpc-name`, `notification-name`, or `path` are set, the rule matches for any RPC, notification, data, or action access.
-* `context`: `context` is either of the strings `cli`, `netconf`, `webui`, `snmp`, or `*` for a data rule. Furthermore, when we initiate user sessions from MAAPI, we can choose any string we want. Similarly to command rules, we can differentiate access depending on which agent is used to gain access.
-* `access-operations`: `access-operations` is used to match the operation that NSO tries to perform. It must be one or more of the "create", "read", "update", "delete" and "exec" values from the `access-operations-type` bits type definition in `ietf-netconf-acm.yang`, or "\*" to match any operation.
-* `action`: This leaf has the same characteristics as the `action` leaf for command access.
-* `log-if-permit`: This leaf has the same characteristics as the `log-if-permit` leaf for command access.
-* `comment`: An optional textual description of the rule.
-
-If no matching rule is found in any of the `rule` lists in any `rule-list` entry that matches the user's groups, the data model node for which access is requested is examined for the presence of the NACM extensions:
-
-* If the `nacm:default-deny-all` extension is specified for the data model node, the access is denied.
-* If the `nacm:default-deny-write` extension is specified for the data model node, and `create`, `update`, or `delete` access is requested, the access is denied.
-
-If examination of the NACM extensions did not result in access being denied, the value (`permit` or `deny`) of the relevant default leaf is examined:
-
-* If `read` access is requested, the value of `/nacm/read-default` determines whether access is permitted or denied.
-* If `create`, `update`, or `delete` access is requested, the value of `/nacm/write-default` determines whether access is permitted or denied.
-* If `exec` access is requested, the value of `/nacm/exec-default` determines whether access is permitted or denied.
-
-If access is permitted due to one of these default leafs, this augmentation from `tailf-acm.yang` is relevant:
-
-```yang
-augment /nacm:nacm {
- ...
- leaf log-if-default-permit {
- type empty;
- description
- "If this leaf is present, access granted due to one of
- /nacm/read-default, /nacm/write-default, /nacm/exec-default
- /nacm/cmd-read-default, or /nacm/cmd-exec-default
- being set to 'permit' is logged in the developer log.
- Otherwise, only denied access is logged. Mainly intended
- for debugging of rules.";
- }
-}
-```
-
-I.e., it has the same effect as the `log-if-permit` leaf for the `rule` lists, but for the case where the value of one of the default leafs permits access.
-
-When NSO executes a command, the command rules in the authorization database are searched. The rules are tried in order, as described above. When a rule matches the operation (command) that NSO is attempting, the action of the matching rule is applied, whether permit or deny.
-
-When actual data access is attempted, the data rules are searched. E.g., when a user attempts to execute `delete aaa` in the CLI, the user needs delete access to the entire tree `/aaa`.
-
-Another example: if a CLI user types `show configuration aaa` followed by TAB, it suffices to have read access to at least one item below `/aaa` for the CLI to perform the TAB completion. If no rule matches, or an explicit deny rule is found, the CLI will not TAB-complete.
-
-Yet another example: if a user tries to execute `delete aaa authentication users`, we need to perform a check on the paths `/aaa` and `/aaa/authentication` before attempting to delete the sub-tree. Say that we have a permit rule for the path `/aaa/authentication/users` and a subsequent deny rule for the path `/aaa`. With this rule set, the user should indeed be allowed to delete the entire `/aaa/authentication/users` tree, but not the `/aaa` tree nor the `/aaa/authentication` tree.
-
-We have two variations on how the rules are processed. The easy case is when we actually try to read or write an item in the configuration database. The execution goes like this:
-
-```
-foreach rule {
- if (match(rule, path)) {
- return rule.action;
- }
-}
-```
-
-The second case is when we execute TAB completion in the CLI. This is more complicated. The execution goes like this:
-
-```
-rules = select_rules_that_may_match(rules, path);
-if (any_rule_is_permit(rules))
- return permit;
-else
- return deny;
-```
-
-The idea is that, as we traverse (through TAB) down the XML tree, we must continue as long as there is at least one rule that could possibly match later, once we have more data. For example, assume we have:
-
-1. `"/system/config/foo" --> permit`
-2. `"/system/config" --> deny`
-
-If we stand at `"/system/config"` in the CLI and hit TAB, we want the CLI to show `foo` as a completion, but none of the other nodes that exist under `/system/config`. If we instead try to execute `delete /system/config`, the request must be rejected.
-
-By default, NACM rules are applied to entire `tailf:action` or YANG 1.1 `action` statements, but not to their `input` statement child leafs. To override this behavior and enable NACM rules on `input` leafs, set `/ncs-config/aaa/action-input-rules/enabled` to `true`. When enabled, all input leafs given to an action will be validated against the NACM rules. If broad `deny` NACM rules are used, you might need to add `permit` rules for the affected action input leafs to allow actions to be used with parameters.
-
-### NACM Rules and Services
-
-By design, NACM rules are ignored for changes done by services (FASTMAP, Reactive FASTMAP, or Nano services). The reasoning behind this is that a service package can be seen as a controlled way to provide limited access to devices for a user group that is not allowed to apply arbitrary changes on the devices.
-
-However, there are NSO installations where this behavior is not desired, and NSO administrators want to enforce NACM rules even on changes done by services. For this purpose, the leaf called `/nacm/enforce-nacm-on-services` is provided. By default, it is set to `false`.
-
-Note, however, that even with this leaf set to `true`, there are currently limitations. Namely, the post-actions for Nano services are run in a user session without any access checks. Besides that, NACM rules are not enforced on the read operations performed in the service callbacks.
-
-It might be desirable to deny everything for a user group and only allow access to a specific service. This pattern could be used to allow an operator to provision the service, but deny everything else. While this pattern works for a normal FASTMAP service, there are some caveats for stacked services, Reactive FASTMAP, and Nano services. For these kinds of services, in addition to the service itself, access should be provided to the user group for the following paths:
-
-* In case of stacked services, the user group needs read and write access to the leaf `private/re-deploy-counter` under the bottom service. Otherwise, the user will not be able to redeploy the service.
-* In the case of Reactive FASTMAP or Nano services, the user group needs read and write access to the following:
- * `/zombies`
- * `/side-effect-queue`
- * `/kickers`
-
-### Device Group Authorization
-
-In deployments with many devices, it can become cumbersome to handle data authorization per device. To help with this, there is a rule type that works on device group membership (for more on device groups, see [Device Groups](../../operation-and-usage/operations/nso-device-manager.md#user_guide.devicemanager.device_groups)). Devices are added to different device groups, and the rule type `device-group-rule` is used.
-
-The IETF NACM rule type is augmented with a new rule type named `device-group-rule` which contains a leafref to the device groups. See the following example.
-
-{% code title="Device Group Model Augmentation" %}
-```yang
-augment "/nacm:nacm/nacm:rule-list/nacm:rule/nacm:rule-type" {
- case device-group-rule {
- leaf device-group {
- type leafref {
- path "/ncs:devices/ncs:device-group/ncs:name";
- }
- description
- "Which device group this rule applies to.";
- }
- }
-}
-```
-{% endcode %}
-
-In the example below, we configure two device groups based on different regions and add devices to them.
-
-{% code title="Device Group Configuration" %}
-```xml
-<devices xmlns="http://tail-f.com/ns/ncs">
-  <device-group>
-    <name>us_east</name>
-    <device-name>cli0</device-name>
-    <device-name>gen0</device-name>
-  </device-group>
-  <device-group>
-    <name>us_west</name>
-    <device-name>nc0</device-name>
-  </device-group>
-</devices>
-```
-{% endcode %}
-
-In the example below, we configure an operator for the `us_east` region:
-
-{% code title="NACM Group Configuration" %}
-```xml
-<nacm xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm">
-  <groups>
-    <group>
-      <name>us_east</name>
-      <user-name>us_east_oper</user-name>
-    </group>
-  </groups>
-</nacm>
-```
-{% endcode %}
-
-In the example below, we configure the device group rules and refer to the device group and the `us_east` group.
-
-{% code title="Device Group Authorization Rules" %}
-```xml
-<nacm xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm">
-  <rule-list>
-    <name>us_east</name>
-    <group>us_east</group>
-    <rule>
-      <name>us_east_read_permit</name>
-      <device-group>us_east</device-group>
-      <access-operations>read</access-operations>
-      <action>permit</action>
-    </rule>
-    <rule>
-      <name>us_east_create_permit</name>
-      <device-group>us_east</device-group>
-      <access-operations>create</access-operations>
-      <action>permit</action>
-    </rule>
-    <rule>
-      <name>us_east_update_permit</name>
-      <device-group>us_east</device-group>
-      <access-operations>update</access-operations>
-      <action>permit</action>
-    </rule>
-    <rule>
-      <name>us_east_delete_permit</name>
-      <device-group>us_east</device-group>
-      <access-operations>delete</access-operations>
-      <action>permit</action>
-    </rule>
-  </rule-list>
-</nacm>
-```
-{% endcode %}
-
-In summary, device group authorization gives a more compact configuration for deployments where devices can be grouped and authorization can be done on a device-group basis.
-
-It is recommended to restrict modifications of the device-group subtree to a limited set of users.
-
-### Authorization Examples
-
-Assume that we have two groups, `admin` and `oper`. We want `admin` to be able to see and edit the XML tree rooted at `/aaa`, but we do not want users who are members of the `oper` group to even see the `/aaa` tree. We would have the following rule list and rule entries. Note that here we use the XML data from `tailf-aaa.yang` as an example; the same applies to all data, for all data models loaded into the system.
-
-```xml
-<rule-list>
-  <name>admin</name>
-  <group>admin</group>
-  <rule>
-    <name>tailf-aaa</name>
-    <module-name>tailf-aaa</module-name>
-    <path>/</path>
-    <access-operations>read create update delete</access-operations>
-    <action>permit</action>
-  </rule>
-</rule-list>
-
-<rule-list>
-  <name>oper</name>
-  <group>oper</group>
-  <rule>
-    <name>tailf-aaa</name>
-    <module-name>tailf-aaa</module-name>
-    <path>/</path>
-    <access-operations>read create update delete</access-operations>
-    <action>deny</action>
-  </rule>
-</rule-list>
-```
-
-If we do not want the members of `oper` to be able to execute the NETCONF operation `edit-config`, we define the following rule list and rule entries:
-
-```xml
-<rule-list>
-  <name>oper</name>
-  <group>oper</group>
-  <rule>
-    <name>edit-config</name>
-    <rpc-name>edit-config</rpc-name>
-    <context xmlns="http://tail-f.com/yang/acm">netconf</context>
-    <access-operations>exec</access-operations>
-    <action>deny</action>
-  </rule>
-</rule-list>
-```
-
-To spell it out, the above defines four elements to match. If NSO tries to perform a NETCONF operation, the operation is `edit-config`, the user who runs the command is a member of the `oper` group, and it is an `exec` (execute) operation, then we have a match. If so, the action is `deny`.
-
-The `path` leaf can be used to specify explicit paths into the XML tree using XPath syntax. For example, the following:
-
-```xml
-<rule-list>
-  <name>admin</name>
-  <group>admin</group>
-  <rule>
-    <name>bob-password</name>
-    <path>/aaa/authentication/users/user[name='bob']/password</path>
-    <context xmlns="http://tail-f.com/yang/acm">cli</context>
-    <access-operations>read update</access-operations>
-    <action>permit</action>
-  </rule>
-</rule-list>
-```
-
-This explicitly allows the `admin` group to change the password for precisely the user `bob` when using the CLI. Had `path` been `/aaa/authentication/users/user/password`, the rule would apply to all password elements for all users. Since the `path` leaf completely identifies the nodes that the rule applies to, we do not need to give `tailf-aaa` for the `module-name` leaf.
-
-NSO applies variable substitution, whereby the username of the logged-in user can be used in a `path`. Thus:
-
-```xml
-<rule-list>
-  <name>admin</name>
-  <group>admin</group>
-  <rule>
-    <name>user-password</name>
-    <path>/aaa/authentication/users/user[name='$USER']/password</path>
-    <context xmlns="http://tail-f.com/yang/acm">cli</context>
-    <access-operations>read update</access-operations>
-    <action>permit</action>
-  </rule>
-</rule-list>
-```
-
-The above rule allows all users that are part of the `admin` group to change their own passwords only.
-
-A member of `oper` is able to execute the NETCONF `action` operation if that member has `exec` access on the NETCONF `action` RPC, `read` access on all instances in the hierarchy of data nodes that identify the specific action in the data store, and `exec` access on the specific action. For example, consider an action defined as below.
-
-```yang
-container test {
- action double {
- input {
- leaf number {
- type uint32;
- }
- }
- output {
- leaf result {
- type uint32;
- }
- }
- }
-}
-```
-
-To be able to execute the `double` action through the NETCONF `action` RPC, the members of `oper` need the following rule list and rule entries.
-
-```xml
-<rule-list>
-  <name>oper</name>
-  <group>oper</group>
-
-  <rule>
-    <name>allow-netconf-rpc-action</name>
-    <rpc-name>action</rpc-name>
-    <context xmlns="http://tail-f.com/yang/acm">netconf</context>
-    <access-operations>exec</access-operations>
-    <action>permit</action>
-  </rule>
-  <rule>
-    <name>allow-read-test</name>
-    <path>/test</path>
-    <access-operations>read</access-operations>
-    <action>permit</action>
-  </rule>
-  <rule>
-    <name>allow-exec-double</name>
-    <path>/test/double</path>
-    <access-operations>exec</access-operations>
-    <action>permit</action>
-  </rule>
-</rule-list>
-```
-
-Or, more simply, the following rule set:
-
-```xml
-<rule-list>
-  <name>oper</name>
-  <group>oper</group>
-
-  <rule>
-    <name>allow-netconf-rpc-action</name>
-    <rpc-name>action</rpc-name>
-    <context xmlns="http://tail-f.com/yang/acm">netconf</context>
-    <access-operations>exec</access-operations>
-    <action>permit</action>
-  </rule>
-  <rule>
-    <name>allow-exec-double</name>
-    <path>/test</path>
-    <access-operations>read exec</access-operations>
-    <action>permit</action>
-  </rule>
-</rule-list>
-```
-
-Finally, if we wish members of the `oper` group to never be able to execute the `request system reboot` command, also available as the NETCONF `reboot` RPC, we have:
-
-```xml
-<rule-list>
-  <name>oper</name>
-  <group>oper</group>
-
-  <cmdrule xmlns="http://tail-f.com/yang/acm">
-    <name>request-system-reboot</name>
-    <context>cli</context>
-    <command>request system reboot</command>
-    <access-operations>exec</access-operations>
-    <action>deny</action>
-  </cmdrule>
-  <cmdrule xmlns="http://tail-f.com/yang/acm">
-    <name>request-reboot</name>
-    <context>cli</context>
-    <command>request reboot</command>
-    <access-operations>exec</access-operations>
-    <action>deny</action>
-  </cmdrule>
-  <rule>
-    <name>netconf-reboot</name>
-    <rpc-name>reboot</rpc-name>
-    <context xmlns="http://tail-f.com/yang/acm">netconf</context>
-    <access-operations>exec</access-operations>
-    <action>deny</action>
-  </rule>
-</rule-list>
-```
-
-### Troubleshooting NACM Rules
-
-In this section, we list some tips to make it easier to troubleshoot NACM rules.
-
-{% hint style="success" %}
-Use `log-if-permit` and `log-if-default-permit` together with the developer log level set to `trace`.
-{% endhint %}
-
-Use the `log-if-permit` leaf from the `tailf-acm.yang` module augmentation for rules with `action` `permit`. When such a rule triggers a permit action, a trace entry is added to the developer log. To see trace entries, make sure `/ncs-config/logs/developer-log-level` is set to `trace`.
-
-If you have a default rule with `action` `permit` you can use the `log-if-default-permit` leaf instead.
-
-{% hint style="success" %}
-NACM rules are read at the start of the session and are used throughout the session.
-{% endhint %}
-
-When a user session is created, it gathers the authorization rules that are relevant for that user's group(s). The rules are used throughout the user session's lifetime. When we update the AAA rules, the active sessions are not affected. For example, if an administrator updates the NACM rules in one session, the update will not apply to any other currently active sessions. The updates will apply to new sessions created after the update.
-
-{% hint style="success" %}
-Explicitly state NACM groups when starting the CLI. For example `ncs_cli -u oper -g oper`.
-{% endhint %}
-
-It is the user's group membership that determines which rules apply. Starting the CLI using the `ncs_cli` command without explicitly setting the groups defaults to the actual UNIX groups the user is a member of. On Darwin, one of the default groups is usually `admin`, which can lead to the wrong group being used.
-
-{% hint style="success" %}
-Be careful with namespaces in rulepaths.
-{% endhint %}
-
-Unless a rule path is made explicit by specifying a namespace, it will apply to that specific path in all namespaces. Below, we show parts of an example from [RFC 8341](https://tools.ietf.org/html/rfc8341), where the `path` element has an `xmlns` attribute and the path is namespaced. If these were not namespaced, the rules would not behave as expected.
-
-{% code title="Example: Excerpt from RFC 8341 Appendix A.4" %}
-```xml
-<rule>
-  <name>permit-acme-config</name>
-  <path xmlns:acme="http://example.com/ns/netconf">
-    /acme:acme-netconf/acme:config-parameters
-  </path>
-  ...
-```
-{% endcode %}
-
-In the example above (excerpt from RFC 8341, Appendix A.4), the path is namespaced.
-
-## The AAA Cache
-
-NSO's AAA subsystem will cache the AAA information in order to speed up the authorization process. This cache must be updated whenever there is a change to the AAA information. The mechanism for this update depends on how the AAA information is stored, as described in the following two sections.
-
-### Populating AAA using CDB
-
-To start NSO, the data models for AAA must be loaded. If no actual data is loaded for these models, the defaults allow all read and exec access, while write access is denied. Access may still be further restricted by the NACM extensions, though; e.g., the `/nacm` container has `nacm:default-deny-all`, meaning that not even read access is allowed if no data is loaded.
-
-The NSO installation ships with an XML initialization file containing AAA configuration. The file is called `aaa_init.xml` and is, by default, copied to the CDB directory by the NSO install scripts.
-
-The local installation variant, targeting development only, defines two users, `admin` and `oper`, with passwords set to `admin` and `oper`, respectively, for authentication. The two users belong to user groups with NACM rules restricting their authorization level. The system installation `aaa_init.xml` variant, targeting production deployment, defines NACM rules only, as users are, by default, authenticated using PAM. The NACM rules target two user groups, `ncsadmin` and `ncsoper`. Users belonging to the `ncsoper` group are limited to read-only access.
-
-{% hint style="info" %}
-The default `aaa_init.xml` file provided with the NSO system installation must not be used as-is in a deployment without reviewing and verifying that every NACM rule in the file matches the intended security policy of the deployment.
-{% endhint %}
-
-Normally, the AAA data will be stored as configuration in CDB. This allows changes to be made through NSO's transaction-based configuration management. In this case, the AAA cache will be updated automatically when changes are made to the AAA data. If changing the AAA data via NSO's configuration management is not possible or desirable, it is alternatively possible to use the CDB operational data store for AAA data. In this case, the AAA cache can be updated either explicitly, e.g., by using the `maapi_aaa_reload()` function (see [confd\_lib\_maapi(3)](../../resources/man/confd_lib_maapi.3.md) in Manual Pages), or by triggering a subscription notification by using the subscription lock when updating the CDB operational data store (see [Using CDB](../../development/core-concepts/using-cdb.md) in Development).
-
-### Hiding the AAA Tree
-
-Some applications may not want to expose the AAA data to end users in the CLI or the Web UI. Two reasonable approaches exist here, and both rely on the `tailf:export` statement. If a module has `tailf:export none`, it will be invisible to all agents. We can then use a transform, whereby we define another AAA model and write a transform program that maps our AAA data to the data that must exist in `tailf-aaa.yang` and `ietf-netconf-acm.yang`. This way, we can choose to export and expose an entirely different AAA model.
-
-Yet another very easy way out is to define a set of static AAA rules whereby a set of fixed users and fixed groups have fixed access to our configuration data. Possibly, the only field we wish to manipulate is the password field.
diff --git a/administration/management/high-availability.md b/administration/management/high-availability.md
deleted file mode 100644
index b1272fce..00000000
--- a/administration/management/high-availability.md
+++ /dev/null
@@ -1,1236 +0,0 @@
----
-description: Implement redundancy in your deployment using High Availability (HA) setup.
----
-
-# High Availability
-
-As a single NSO node can fail or lose network connectivity, you can configure multiple nodes in a highly available (HA) setup, which replicates the CDB configuration and operational data across participating nodes. It allows the system to continue functioning even when some nodes are inoperable.
-
-The replication architecture is that of one active primary and a number of secondaries. This means all configuration write operations must occur on the primary, which distributes the updates to the secondaries.
-
-Operational data in the CDB may be replicated or not based on the `tailf:persistent` statement in the data model. If replicated, operational data writes can only be performed on the primary, whereas non-replicated operational data can also be written on the secondaries.
-
-Replication is supported in several different architectural setups. For example, two-node active/standby designs as well as multi-node clusters with runtime software upgrade.
-
-
-*Figure: Primary - Secondary Configuration*
-
-*Figure: One Primary - Several Secondaries*
-
-This feature is independent of but compatible with the [Layered Service Architecture (LSA)](../advanced-topics/layered-service-architecture.md), which also configures multiple NSO nodes to provide additional scalability. When the following text simply refers to a cluster, it identifies the set of NSO nodes participating in the same HA group, not an LSA cluster, which is a separate concept.
-
-NSO supports the following options for implementing an HA setup to cater to the widest possible range of use cases (only one can be used at a time):
-
-* [**HA Raft**](high-availability.md#ug.ha.raft): Using a modern, consensus-based algorithm, it offers a robust, hands-off solution that works best in the majority of cases.
-* [**Rule-based HA**](high-availability.md#ug.ha.builtin): A less sophisticated solution that allows you to influence the primary selection but may require occasional manual operator action.
-* [**External HA**](high-availability.md#ferret): NSO only provides data replication; all other functions, such as primary selection and group membership management, are performed by an external application, using the HA framework (HAFW).
-
-In addition to data replication, having a fixed address to connect to the current primary in an HA group greatly simplifies access for operators, users, and other systems alike. Use [Tail-f HCC Package](high-availability.md#ug.ha.hcc) or an [external load balancer](high-availability.md#ug.ha.lb) to manage it.
-
-## NSO HA Raft
-
-[Raft](https://raft.github.io/) is a consensus algorithm that reliably distributes a set of changes to a group of nodes and robustly handles network and node failure. It can operate in the face of multiple, subsequent failures, while also allowing a previously failed or disconnected node to automatically rejoin the cluster without risk of data conflicts.
-
-Compared to traditional fail-over HA solutions, Raft relies on the consensus of the participating nodes, which addresses the so-called “split-brain” problem, where multiple nodes assume a primary role. This problem is especially characteristic of two-node systems, where it is impossible for a single node on its own to distinguish between losing network connectivity itself versus the other node malfunctioning. For this reason, Raft requires at least three nodes in the cluster.
-
-Three is the recommended cluster size, allowing the cluster to operate in the face of a single node failure. In case you need to tolerate two nodes failing simultaneously, you can add two additional nodes, for a five-node cluster. However, permanently having more than five nodes in a single cluster is currently not recommended, since Raft requires the majority of the currently configured nodes in the cluster to reach consensus. Without consensus, the cluster cannot function.
-
-You can start a sample HA Raft cluster using the [examples.ncs/high-availability/raft-cluster](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/raft-cluster) example to test it out. The scripts in the example show various aspects of cluster setup and operation, which are further described in the rest of this section.
-
-Optionally, examples using separate containers for each HA Raft cluster member with NSO system installations are available and referenced in the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) example in the NSO example set.
-
-### Overview of Raft Operation
-
-The Raft algorithm works with the concept of (election) terms. In each term, nodes in the cluster vote for a leader. The leader is elected when it receives the majority of the votes. Since each node only votes for a single leader in a given term, there can only be one leader in the cluster for this term.
-
-Once elected, the leader becomes responsible for distributing the changes and ensuring consensus in the cluster for that term. Consensus means that the majority of the participating nodes must confirm a change before it is accepted. This is required for the system to ensure no changes ever get overwritten and provide reliability guarantees. On the other hand, it also means more than half of the nodes must be available for normal operation.
-
-Changes can only be performed on the leader, that will accept the change after the majority of the cluster nodes confirm it. This is the reason a typical Raft cluster has an odd number of nodes; exactly half of the nodes agreeing on a change is not sufficient. It also makes a two-node cluster (or any even number of nodes in a cluster) impractical; the system as a whole is no more available than it is with one fewer node.
-
-If the connection to the leader is broken, such as during a network partition, the nodes start a new term and a new election. Another node can become a leader if it gets the majority of the votes of all nodes initially in the cluster. While gathering votes, the node has the status of a candidate. In case multiple nodes assume candidate status, a split-vote scenario may occur, which is resolved by starting a fresh election until a candidate secures the majority vote.
-
-If there aren't enough reachable nodes to obtain a majority, a candidate can stay in the candidate state indefinitely. Otherwise, when a node votes for a candidate, it becomes a follower and stays a follower for the rest of the term, regardless of whether the candidate is elected.
-
-Additionally, the NSO node can also be in the stalled state, if HA Raft is enabled but the node has not joined a cluster.
-
-### Node Names and Certificates
-
-Each node in an HA Raft cluster needs a unique name. Names are usually in the `ADDRESS` format, where `ADDRESS` identifies a network host where the NSO process is running, such as a fully qualified domain name (FQDN) or an IPv4 address.
-
-Other nodes in the cluster must be able to resolve and reach the `ADDRESS`, which creates a dependency on the DNS if you use domain names instead of IP addresses.
-
-Limitations of the underlying platform place a constraint on the format of `ADDRESS`, which can't be a simple short name (without a dot), even if the system is able to resolve such a name using `hosts` file or a similar mechanism.
-
-You specify the node address in the `ncs.conf` file as the value for `node-address`, under the `listen` container. You can also use the full node name (with the "@" character); however, that is usually unnecessary, as the system prepends `ncsd@` as needed.
-
-Another aspect in which `ADDRESS` plays a role is authentication. The HA system uses mutual TLS to secure communication between cluster nodes. This requires you to configure a trusted Certificate Authority (CA) and a key/certificate pair for each node. When nodes connect, they check that the certificate of the peer validates against the CA and matches the `ADDRESS` of the peer.
-
-{% hint style="info" %}
-Consider that TLS not only verifies that the certificate/key pair comes from a trusted source (certificate is signed by a trusted CA), it also checks that the certificate matches the host you are connecting to. Host A may have a valid certificate and key, signed by a trusted CA, however, if the certificate is for another host, say host B, the authentication will fail.
-{% endhint %}
-
-In most cases, this means the `ADDRESS` must appear in the node certificate's Subject Alternative Name (SAN) extension, as `dNSName` (see [RFC2459](https://datatracker.ietf.org/doc/html/rfc2459)).
-
-Create and use a self-signed CA to secure the NSO HA Raft cluster. A self-signed CA is the only secure option. The CA should only be used to sign the certificates of the member nodes in one NSO HA Raft cluster. It is critical for security that the CA is not used to sign any other certificates. Any certificate signed by the CA can be used to gain complete control of the NSO HA Raft cluster.
-
-See the [examples.ncs/high-availability/raft-cluster](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/raft-cluster) example for one way to set up a self-signed CA and provision individual node certificates. The example uses a shell script `gen_tls_certs.sh` that invokes the `openssl` command. Consult the section [Recipe for a Self-signed CA](high-availability.md#recipe-for-a-self-signed-ca) for using it independently of the example.
-
-Examples using separate containers for each HA Raft cluster member with NSO system installations that use a variant of the `gen_tls_certs.sh` script are available and referenced in the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) example in the NSO example set.
-
-{% hint style="info" %}
-When using an IP address instead of a DNS name for a node's `ADDRESS`, you must add the IP address to the certificate's `dNSName` SAN field (adding it to the `iPAddress` field only is insufficient). This is a known limitation in the current version.
-{% endhint %}
-
-The following is an HA Raft configuration snippet for `ncs.conf` that includes certificate settings and a sample `ADDRESS`:
-
-```xml
-<ha-raft>
-  <listen>
-    <node-address>198.51.100.10</node-address>
-  </listen>
-  <ssl>
-    <ca-cert-file>${NCS_CONFIG_DIR}/dist/ssl/cert/myca.crt</ca-cert-file>
-    <cert-file>${NCS_CONFIG_DIR}/dist/ssl/cert/node-100-10.crt</cert-file>
-    <key-file>${NCS_CONFIG_DIR}/dist/ssl/cert/node-100-10.key</key-file>
-  </ssl>
-</ha-raft>
-```
-
-### Recipe for a Self-signed CA
-
-HA Raft uses a standard TLS protocol with public key cryptography for securing cross-node communication, where each node requires a separate public/private key pair and a corresponding certificate. Key and certificate management is a broad topic and is critical to the overall security of the system.
-
-The following text provides a recipe for generating certificates using a self-signed CA. It uses strong cryptography and algorithms that are deemed suitable for production use. However, it makes a few assumptions that may not be appropriate for all environments. Always consider how they affect your own deployment and consult a security professional if in doubt.
-
-The recipe makes the following assumptions:
-
-* You use a secured workstation or server to run these commands and handle the generated keys with care. In particular, you must copy the generated keys to NSO nodes in a secure fashion, such as using `scp`.
-* The CA is used solely for a single NSO HA Raft cluster, with certificates valid for 10 years, and provides no CRL. If a single key or host is compromised, a new CA and all key/certificate pairs must be recreated and reprovisioned in the cluster.
-* Keys and signatures based on ecdsa-with-sha384/P-384 are sufficiently secure for the vast majority of environments. However, if your organization has specific requirements, be sure to follow those.
-
-To use this recipe:
-
-* First prepare a working environment on a secure host by creating a new directory and copying the `gen_tls_certs.sh` script from [examples.ncs/high-availability/raft-cluster](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/raft-cluster) into it. Additionally, ensure that the `openssl` command, version 1.1 or later, is available and the system time is set correctly. Supposing that you have a cluster named `lower-west`, you might run:
-
-```bash
-$ mkdir raft-ca-lower-west
-$ cd raft-ca-lower-west
-$ cp $NCS_DIR/examples.ncs/high-availability/raft-cluster/gen_tls_certs.sh .
-$ openssl version
-$ date
-```
-
-{% hint style="info" %}
-Including the cluster name in the directory name helps distinguish the certificates of one HA cluster from another, such as when using an LSA deployment in an HA configuration.
-{% endhint %}
-
-The recipe relies on the `gen_tls_certs.sh` script to generate individual certificates. For clusters using FQDN node addresses, invoke the script with full hostnames of all the participating nodes. For example:
-
-```bash
-$ ./gen_tls_certs.sh node1.example.org node2.example.org node3.example.org
-```
-
-{% hint style="info" %}
-Using only hostnames, e.g. `node1`, will not work.
-{% endhint %}
-
-If your HA cluster is using IP addresses instead, add the `-a` option to the command and list the IPs:
-
-```bash
-$ ./gen_tls_certs.sh -a 192.0.2.1 192.0.2.2 192.0.2.3
-```
-
-The script outputs the location of the relevant files and you should securely transfer each set of files to the corresponding NSO node. For each node, transfer only the three files: `ca.crt`, _`host`_`.crt`, and _`host`_`.key`.
-
-* Once the certificates are deployed, you can check their validity with the `openssl verify` command:
-
-```bash
-$ openssl verify -CAfile ssl/certs/ca.crt ssl/certs/node1.example.org.crt
-```
-
-This command takes into account the current time and can be used during troubleshooting. It can also display information contained in the certificate if you use the `openssl x509 -text -in ssl/certs/`_`node1.example.org`_`.crt -noout` variant. The latter form allows you to inspect the incorporated hostname/IP address and certificate validity dates.
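-
-For instance, to view both the validity dates and the SAN entries at once (a sketch assuming OpenSSL 1.1.1 or later, which supports the `-ext` option; the certificate path is illustrative):
-
-```bash
-# Print the validity period and SAN extension of a node certificate
-$ openssl x509 -in ssl/certs/node1.example.org.crt -noout -dates -ext subjectAltName
-```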
-
-### Actions
-
-NSO HA Raft can be controlled through several actions. All actions are found under `/ha-raft/`. In the best-case scenario, you will only need the `create-cluster` action to initialize the cluster and the `read-only` and `create-cluster` actions when upgrading the NSO version.
-
-The available actions are listed below:
-
-| Action | Description |
-|--------|-------------|
-| `create-cluster` | Initialize an HA Raft cluster. This action should only be invoked once, to form a new cluster when no HA Raft log exists. The members of the HA Raft cluster consist of the NSO node where the `/ha-raft/create-cluster` action is invoked, which becomes the leader of the cluster, and the members specified by the `member` parameter. |
-| `adjust-membership` | Add or remove an HA node from the HA Raft cluster. |
-| `disconnect` | Disconnect an HA node from all remaining nodes. In the event of revoking a TLS certificate, invoke this action to disconnect the already established connections to the node with the revoked certificate. A disconnected node with a valid TLS certificate may re-establish the connection. |
-| `reset` | Reset the (disabled) local node to make the leader perform a full sync to it, if an HA Raft cluster exists. If reset is performed on the leader node, the node steps down from leadership and is synced by the next leader node. An HA Raft member changes role to `disabled` if its `ncs.conf` has changes incompatible with the `ncs.conf` on the leader, or on non-recoverable failures upon opening a snapshot; see the `/ha-raft/status/disabled-reason` leaf for the reason. Set `force` to `true` to perform the reset even when `/ha-raft/status/role` is not set to `disabled`. |
-| `handover` | Hand over leadership to another member of the HA Raft cluster, or step down from leadership and start a new election. |
-| `read-only` | Toggle read-only mode. If the mode is `true`, no configuration changes can occur. |
-
-### Network and `ncs.conf` Prerequisites
-
-In addition to the network connectivity required for the normal operation of a standalone NSO node, nodes in the HA Raft cluster must be able to initiate TCP connections from a random ephemeral client port to the following ports on other nodes:
-
-* Port 4369
-* Ports in the range 4370-4399 (configurable)
-
-You can change the ports in the second listed range from the default of 4370-4399. Use the `min-port` and `max-port` settings of the `ha-raft/listen` container.
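-
-For example, a sketch of such a configuration in `ncs.conf`, narrowing the range to ten ports (the range chosen here is purely illustrative):
-
-```xml
-<ha-raft>
-  <listen>
-    <min-port>4370</min-port>
-    <max-port>4379</max-port>
-  </listen>
-</ha-raft>
-```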
-
-The Raft implementation does not impose any other hard limits on the network, but keep in mind that consensus requires communication with other nodes in the cluster. High round-trip latency between cluster nodes is likely to negatively impact the transaction throughput of the system.
-
-The HA Raft cluster also requires compatible `ncs.conf` files among the member nodes. In particular, `/ncs-config/cdb/operational/enabled` and `/ncs-config/rollback/enabled` values affect replication behavior and must match. Likewise, each member must have the same set of encryption keys and the keys cannot be changed while the cluster is in operation.
-
-To update the `ncs.conf` configuration, you must manually update the copy on each member node, making sure the new versions contain compatible values. Then perform the reload on the leader and the follower members will automatically reload their copies of the configuration file as well.
-
-If a node is a cluster member but has been configured with a new, incompatible `ncs.conf` file, it is automatically disabled. See `/ha-raft/status/disabled-reason` for the reason. You can re-enable the node with the `ha-raft reset` command once you have reconciled the incompatibilities.
-
-### Connected Nodes and Node Discovery
-
-Raft has a notion of cluster configuration, in particular, how many and which members the cluster has. You define the member nodes when you first initialize the cluster with the `create-cluster` command, or later change them with the `adjust-membership` command. Among other things, the set of member nodes determines how many nodes are required for consensus.
-
-However, not all cluster members may be reachable or alive all the time. Raft implementation in NSO uses TCP connections between nodes to transport data. The TCP connections are authenticated and encrypted using TLS by default (see [Security Considerations](high-availability.md#ch_ha.raft_security)). A working connection between nodes is essential for the cluster to function but a number of factors, such as firewall rules or expired/invalid certificates, can prevent the connection from establishing.
-
-Therefore, NSO distinguishes between configured member nodes and nodes to which it has established a working transport connection. The latter are called connected nodes. In a normal, fully working, and properly configured cluster, the connected nodes will be the same as member nodes (except for the current node).
-
-To help troubleshoot connectivity issues without affecting cluster operation, the connected-nodes list also shows nodes that are not actively participating in the cluster but have established a transport connection to nodes in the cluster. The optional discovery mechanism, described next, relies on this functionality.
-
-NSO includes a mechanism that simplifies the initial cluster setup by enumerating known nodes. This mechanism uses a set of seed nodes to discover all connectable nodes, which can then be used with the `create-cluster` command to form a Raft cluster.
-
-When you specify one or more nodes with the `/ha-raft/seed-nodes/seed-node` setting in the `ncs.conf` file, the current node tries to establish a connection to these seed nodes, in order to discover the list of all nodes potentially participating in the cluster. For the discovery to work properly, all other nodes must also use seed nodes and the set of seed nodes must overlap. The recommended practice is to use the same set of seed nodes on every participating node.
-
-Along with providing an autocompletion list for the `create-cluster` command, this feature streamlines the discovery of node names when using NSO in containerized or other dynamic environments, where node addresses are not known in advance.
-
-### Initial Cluster Setup
-
-Creating a new HA cluster consists of two parts: configuring the individual nodes and running the `create-cluster` action.
-
-First, you must update the `ncs.conf` configuration file for each node. All HA Raft configuration comes under the `/ncs-config/ha-raft` element.
-
-As part of the configuration, you must:
-
-* Enable HA Raft functionality through the `enabled` leaf.
-* Set `node-address` and the corresponding TLS parameters (see [Node Names and Certificates](high-availability.md#ch_ha.raft_names)).
-* Identify the cluster this node belongs to with `cluster-name`.
-* Reload or restart the NSO process (if already running).
-* Repeat the preceding steps for every participating node.
-* Enable read-only mode on the designated leader to avoid potential sync issues during cluster formation.
-* Invoke the `create-cluster` action.
-
-The cluster name is simply a character string that uniquely identifies this HA cluster. The nodes in the cluster must use the same cluster name or they will refuse to establish a connection. This setting helps prevent mistakenly adding a node to the wrong cluster when multiple clusters are in operation, such as in an LSA setup.
-
-{% code title="Sample HA Raft config for a cluster node" %}
-```xml
-<ha-raft>
-  <enabled>true</enabled>
-  <cluster-name>sherwood</cluster-name>
-  <listen>
-    <node-address>ash.example.org</node-address>
-  </listen>
-  <ssl>
-    <ca-cert-file>${NCS_CONFIG_DIR}/dist/ssl/cert/myca.crt</ca-cert-file>
-    <cert-file>${NCS_CONFIG_DIR}/dist/ssl/cert/ash.crt</cert-file>
-    <key-file>${NCS_CONFIG_DIR}/dist/ssl/cert/ash.key</key-file>
-  </ssl>
-  <seed-nodes>
-    <seed-node>birch.example.org</seed-node>
-  </seed-nodes>
-</ha-raft>
-```
-{% endcode %}
-
-With all the nodes configured and running, connect to the node that you would like to become the initial leader and invoke the `ha-raft create-cluster` action. The action takes a list of nodes identified by their names. If you have configured `seed-nodes`, you will get auto-completion support, otherwise, you have to type in the names of the nodes yourself.
-
-This action makes the current node a cluster leader and joins the other specified nodes to the newly created cluster. For example:
-
-```bash
-admin@ncs# ha-raft read-only mode true
-admin@ncs# ha-raft create-cluster member [ birch.example.org cedar.example.org ]
-admin@ncs# show ha-raft
-ha-raft status role leader
-ha-raft status leader ash.example.org
-ha-raft status member [ ash.example.org birch.example.org cedar.example.org ]
-ha-raft status connected-node [ birch.example.org cedar.example.org ]
-ha-raft status local-node ash.example.org
-...
-admin@ncs# ha-raft read-only mode false
-```
-
-You can use the `show ha-raft` command on any node to inspect the status of the HA Raft cluster. The output includes the current cluster leader and members according to this node, as well as information about the local node, such as node name (`local-node`) and role. The `status/connected-node` list contains the names of the nodes with which this node has active network connections.
-
-#### **`show ha-raft` Field Definitions**
-
-The command `show ha-raft` is used in NSO to display the current state of the HA Raft cluster. The output typically includes the following information:
-
-* The role of the local node (for example, whether it is the `leader`, `follower`, `candidate`, or `stalled`).
-* The leader of the cluster, if one has been elected.
-* The list of member nodes that belong to the HA Raft cluster.
-* The connected nodes, which are the nodes with which the local node currently has active Raft communication.
-* The local node information, detailing the node’s name and status.
-
-This command is useful for both verifying that the HA Raft cluster is set up correctly and for troubleshooting issues by checking the connectivity and role assignments of the nodes. Some noteworthy terms of output are defined in the table below.
-
-| Term | Definition |
-|------|------------|
-| `role` | The current node's Raft role (`leader`, `follower`, or `candidate`). Occasionally, in NSO, a node might appear as `stalled` if it has lost contact with the leader or quorum. |
-| `leader` | The currently known leader of the cluster. |
-| `member` | A node that is part of the Raft consensus group (i.e., a voting participant, not an observer). Leaders, followers, and candidates are members; observers are not. |
-| `connected-node` | The nodes this instance is connected to. |
-| `local-node` | The name of the current node. |
-| `lag` | The number of indices the replicated log is behind the leader node. A value of 0 means no lag: the node's Raft log is fully up to date with the leader. The larger the value, the more out of sync the node is, which may indicate a replication or connectivity issue. |
-| `index` | The last replicated HA Raft log index, i.e., the last log entry replicated to a node. |
-| `state` | The synchronization status of the node's Raft log. Common values include:<br>`in-sync`: The node is up to date with the leader.<br>`behind`: The node is lagging behind in log replication.<br>`unreachable`: The node cannot reach the leader or other Raft peers, preventing synchronization.<br>`requires-snapshot`: The node has fallen too far behind to catch up using logs and needs a full snapshot from the leader. |
-| `current-index` | The latest log index on this node. |
-| `applied-index` | The last index applied to the CDB. |
-| `serial-number` | The certificate serial number, used to uniquely identify the node. |
-
-In case you get an error, such as `Error: NSO can't reach member node 'ncsd@ADDRESS'.`, verify all of the following:
-
-* The node at the `ADDRESS` is reachable. You can use the `ping ADDRESS` command, for example.
-* The problematic node has the correct `ncs.conf` configuration, especially `cluster-name` and `node-address`. The latter should match the `ADDRESS` and should contain at least one dot.
-* Nodes use compatible configuration. For example, make sure the `ncs.crypto_keys` file (if used) or the `encrypted-strings` configuration in `ncs.conf` is identical across nodes.
-* HA Raft is enabled, using the `show ha-raft` command on the unreachable node.
-* The firewall configuration on the OS and on the network level permits traffic on the required ports (see [Network and `ncs.conf` Prerequisites](high-availability.md#ch_ha.raft_ports)).
-* The node uses a certificate that the CA can validate. For example, copy the certificates to the same location and run `openssl verify -CAfile CA_CERT NODE_CERT` to verify this.
-* Verify that the `epmd -names` command on each node shows the `ncsd` process, as sketched below. If not, stop NSO, run `epmd -kill`, and then start NSO again.
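-
-A sketch of this `epmd` check (assuming the `ncs` start scripts are on the `PATH`; on a `systemd`-managed install, use `systemctl stop ncs` and `systemctl start ncs` instead):
-
-```bash
-# The ncsd name should appear in the port mapper's registry
-$ epmd -names
-# If ncsd is missing: stop NSO, kill the stale epmd instance, then start NSO
-$ ncs --stop
-$ epmd -kill
-$ ncs
-```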
-
-In addition to the above, you may also examine the `logs/raft.log` file for detailed information on the error message and overall operation of the Raft algorithm. The amount of information in the file is controlled by the `/ncs-config/logs/raft-log` configuration in the `ncs.conf`.
-
-### Cluster Management
-
-After the initial cluster setup, you can add new nodes or remove existing nodes from the cluster with the help of the `ha-raft adjust-membership` action. For example:
-
-```bash
-admin@ncs# show ha-raft status member
-ha-raft status member [ ash.example.org birch.example.org cedar.example.org ]
-admin@ncs# ha-raft adjust-membership remove-node birch.example.org
-admin@ncs# show ha-raft status member
-ha-raft status member [ ash.example.org cedar.example.org ]
-admin@ncs# ha-raft adjust-membership add-node dollartree.example.org
-admin@ncs# show ha-raft status member
-ha-raft status member [ ash.example.org cedar.example.org dollartree.example.org ]
-```
-
-When removing a node using the `ha-raft adjust-membership remove-node` command, the removed node is not made aware of its removal and continues signaling the other nodes. This is a limitation of the algorithm, which must also handle situations where the removed node is down or unreachable. To prevent further communication with the cluster, it is important that you ensure the removed node is shut down. Preferably, shut down the to-be-removed node prior to removing it from the cluster, or immediately after; the former is recommended, but the latter is required if only two nodes would remain in the cluster, since shutting down prior to removal would prevent the cluster from reaching consensus.
-
-Additionally, you can force an existing follower node to perform a full re-sync from the leader by invoking the `ha-raft reset` action with the `force` option. Using this action on the leader will make the node give up the leader role and perform a sync with the newly elected leader.
-
-As leader selection during the Raft election is not deterministic, NSO provides the `ha-raft handover` action, which allows you to either trigger a new election if called with no arguments or transfer leadership to a specific node. The latter is especially useful when, for example, one of the nodes resides in a different location and more traffic between locations may incur extra costs or additional latency, so you prefer this node is not the leader under normal conditions.
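-
-For example, invoking the action without arguments steps down and triggers a fresh election (a minimal sketch; when transferring leadership to a specific member, the target node name is supplied as an action parameter, which the CLI can auto-complete):
-
-```bash
-admin@ncs# ha-raft handover
-```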
-
-#### Passive Follower
-
-In certain situations, it may be advantageous to have a follower node that cannot be promoted to leader role. Consider a scenario with three Raft-enabled nodes distributed across two different data centers.
-
-In this case, a node located without a peer in the same data center might experience increased latency due to the requirement for acknowledgments from at least one node in the other data center.
-
-To address this, HA Raft provides the `/ncs-config/ha-raft/passive` setting. When this setting is enabled (set to `true`), it prevents the node from assuming the candidate or leader role. A passive follower still participates by voting in leader elections.
-
-Note that the `passive` parameter is local to the node, meaning other nodes in the cluster are unaware that a particular follower is passive. Consequently, it is possible to initiate a handover action targeting the passive node, but the handover will ultimately fail at a later stage, allowing the current leader to retain its position.
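-
-A minimal `ncs.conf` sketch for a passive node (the other `ha-raft` settings are omitted for brevity):
-
-```xml
-<ha-raft>
-  <!-- enabled, cluster-name, listen, and ssl settings as shown earlier -->
-  <passive>true</passive>
-</ha-raft>
-```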
-
-### Migrating From Existing Rule-based HA
-
-If you have an existing HA cluster using the rule-based built-in HA, you can migrate it to use HA Raft instead. This procedure is performed in four distinct high-level steps:
-
-* Ensuring the existing cluster meets migration prerequisites.
-* Preparing the required HA Raft configuration files.
-* Switching to HA Raft.
-* Adding additional nodes to the cluster.
-
-The procedure does not perform an NSO version upgrade, so the cluster remains on the same version. Nor does it perform any schema upgrades; it only changes the type of the HA cluster.
-
-The migration is performed in place, that is, the existing nodes are disconnected from the old cluster and connected to the new one. This results in a temporary disruption of the service, so it should be performed during a service window.
-
-First, you should ensure the cluster meets migration prerequisites. The cluster must use:
-
-* NSO 6.1.2 or later
-* tailf-hcc 6.0 or later (if used)
-
-In case these prerequisites are not met, follow the standard upgrade procedures to upgrade the existing cluster to supported versions first.
-
-Additionally, ensure that all used packages are compatible with HA Raft, as NSO uses some new or updated notifications about HA state changes. Also, verify the network supports the new cluster communications (see [Network and `ncs.conf` Prerequisites](high-availability.md#ch_ha.raft_ports)).
-
-Secondly, prepare all the `ncs.conf` and related files for each node, such as certificates and keys. Create a copy of all the `ncs.conf` files and disable or remove the existing `<ha>` section in the copies. Then add the required configuration items to the copies, as described in [Initial Cluster Setup](high-availability.md#ch_ha.raft_setup) and [Node Names and Certificates](high-availability.md#ch_ha.raft_names). Do not update the `ncs.conf` files used by the nodes yet.
-
-It is recommended but not necessary that you set the seed nodes in `ncs.conf` to the designated primary and fail-over primary. Do this for all `ncs.conf` files for all nodes.
-
-#### Procedure 1. Migration to HA Raft
-
-1. With the new configurations at hand and verified, start the switch to HA Raft. The cluster nodes should be in their nominal, designated roles. If not, perform a failover first.
-2. On the designated (actual) primary, called `node1`, enable read-only mode.
-
- ```bash
- admin@node1# high-availability read-only mode true
- ```
-3. Then take a backup of all nodes.
-4. Once the backup successfully completes, stop the NSO process on the designated fail-over primary (actual secondary), update its `ncs.conf` and the related (certificate) files for HA Raft, and then start it again. Connect to this node's CLI, here called `node2`, and verify that HA Raft is enabled with the `show ha-raft` command.
-
- ```bash
- admin@node2# show ha-raft
- ha-raft status role stalled
- ha-raft status local-node node2.example.org
- > ... output omitted ... <
- ```
-5. Now repeat the same for the designated primary (`node1`). If you have set the seed nodes, you should see the fail-over primary show under `connected-node`.
-
- ```bash
- admin@node1# show ha-raft
- ha-raft status role stalled
- ha-raft status connected-node [ node2.example.org ]
- ha-raft status local-node node1.example.org
- > ... output omitted ... <
- ```
-6. On the old designated primary (node1) invoke the `ha-raft create-cluster` action and create a two-node Raft cluster with the old fail-over primary (`node2`, actual secondary). The action takes a list of nodes identified by their names. If you have configured `seed-nodes`, you will get auto-completion support, otherwise you have to type in the name of the node yourself.
-
- ```bash
- admin@node1# ha-raft create-cluster member [ node2.example.org ]
- admin@node1# show ha-raft
- ha-raft status role leader
- ha-raft status leader node1.example.org
- ha-raft status member [ node1.example.org node2.example.org ]
- ha-raft status connected-node [ node2.example.org ]
- ha-raft status local-node node1.example.org
- > ... output omitted ... <
- ```
-
- In case of errors running the action, refer to [Initial Cluster Setup](high-availability.md#ch_ha.raft_setup) for possible causes and troubleshooting steps.
-7. Raft requires at least three nodes to operate effectively (as described in [NSO HA Raft](high-availability.md#ug.ha.raft)) and currently, there are only two in the cluster. If the initial cluster had only two nodes, you must provision an additional node and set it up for HA Raft. If the cluster initially had three nodes, there is the remaining secondary node, `node3`, which you must stop, update its configuration as you did with the other two nodes, and start it up again.
-8. Finally, on the old designated primary and current HA Raft leader, use the `ha-raft adjust-membership add-node` action to add this third node to the cluster.
-
- ```bash
- admin@node1# ha-raft adjust-membership add-node node3.example.org
- admin@node1# show ha-raft status member
- ha-raft status member [ node1.example.org node2.example.org node3.example.org ]
- ```
-
-### Security Considerations
-
-Communication between the NSO nodes in an HA Raft cluster takes place over Distributed Erlang, an RPC protocol transported over TLS (unless explicitly disabled by setting `/ncs-config/ha-raft/ssl/enabled` to `false`).
-
-TLS (Transport Layer Security) provides authentication and privacy by only allowing NSO nodes to connect using certificates and keys issued by the same Certificate Authority (CA). Distributed Erlang is transported over TLS 1.2. Access for a host can be revoked by the CA by means of a CRL (Certificate Revocation List). To enforce certificate revocation within an HA Raft cluster, invoke the `/ha-raft/disconnect` action to terminate the pre-existing connections to the node with the revoked certificate. A connection to the node can be re-established once the node's certificate is valid again.
-
-Please ensure the CA key is kept in a safe place since it can be used to generate new certificates and key pairs for peers.
-
-Distributed Erlang supports running multiple NSO nodes on the same host; the node addresses are resolved by the `epmd` ([Erlang Port Mapper Daemon](https://www.erlang.org/resources/man/epmd.html)) service. Once resolved, the NSO nodes communicate directly.
-
-The ports that `epmd` and the NSO nodes listen on are listed in [Network and `ncs.conf` Prerequisites](high-availability.md#ch_ha.raft_ports). `epmd` binds to the wildcard IPv4 address `0.0.0.0` and the IPv6 address `::`.
-
-If `epmd` is exposed to a DoS attack, the HA Raft members may be unable to resolve addresses, and communication could be disrupted. Ensure traffic on these ports is only accepted between the HA Raft members, using firewall rules or other means.
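-
-As an illustration, iptables rules that accept the Raft ports only from a hypothetical peer subnet (adapt the subnet, and the tooling, to your environment):
-
-```bash
-# Allow cluster peers to reach epmd and the configured node port range
-$ sudo iptables -A INPUT -p tcp -s 192.0.2.0/24 --dport 4369 -j ACCEPT
-$ sudo iptables -A INPUT -p tcp -s 192.0.2.0/24 --dport 4370:4399 -j ACCEPT
-# Reject the same ports from everywhere else
-$ sudo iptables -A INPUT -p tcp --dport 4369 -j DROP
-$ sudo iptables -A INPUT -p tcp --dport 4370:4399 -j DROP
-```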
-
-Two NSO nodes can only establish a connection if a shared secret "cookie" matches. The cookie is optionally configured through `/ncs-config/ha-raft/cluster-name`. Note that the cookie is not a security feature but a way to isolate HA Raft clusters and avoid accidental misuse.
-
-### Packages Upgrades in Raft Cluster
-
-NSO contains a mechanism for distributing packages to the nodes in a Raft cluster, greatly simplifying package management in a highly available setup.
-
-You perform all package management operations on the current leader node. To identify the leader node, you can use the `show ha-raft status leader` command on a running cluster.
-
-Invoking the `packages reload` command makes the leader node update its currently loaded packages, just as in a non-HA, single-node setup. At the same time, the leader also distributes these packages to the followers to load. However, the load paths on the follower nodes, such as `/var/opt/ncs/packages/`, are not updated. This means that if a leader election took place, a different leader was elected, and you performed another `packages reload`, the system would try to load the package versions present on this other leader, which may be out of date or not even present.
-
-The recommended approach is therefore to use the `packages ha sync and-reload` command instead, unless a load path is shared between the NSO nodes, such as on the same network drive. This command distributes and updates packages in the load paths on the follower nodes, as well as loading them.
-
-For the full procedure, first, ensure all cluster nodes are up and operational, then follow these steps on the leader node:
-
-* Perform a full backup of the NSO instance, such as running `ncs-backup`.
-* Add, replace, or remove packages on the filesystem. The exact location depends on the type of NSO deployment, for example `/var/opt/ncs/packages/`.
-* Invoke the `packages ha sync and-reload` or `packages ha sync and-add` command to start the upgrade process.
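-
-A sketch of such a session on the leader of a System Install (the package file name is hypothetical):
-
-```bash
-$ ncs-backup
-$ cp my-service-1.1.tar.gz /var/opt/ncs/packages/
-$ ncs_cli -C -u admin
-admin@ncs# packages ha sync and-reload
-```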
-
-Note that while the upgrade is in progress, writes to the CDB are not allowed and will be rejected.
-
-For a `packages ha sync and-reload` example see the `raft-upgrade-l2` NSO system installation-based example referenced by the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) example in the NSO example set.
-
-For more details, troubleshooting, and general upgrade recommendations, see [NSO Packages](package-mgmt.md) and [Upgrade](../installation-and-deployment/upgrade-nso.md).
-
-### Version Upgrade of Cluster Nodes
-
-Currently, the only supported and safe way of upgrading the NSO version of a Raft HA cluster requires the cluster to be taken offline, since the nodes must at all times run the same software version.
-
-Do not attempt an upgrade unless all cluster member nodes are up and actively participating in the cluster. Verify the current cluster state with the `show ha-raft status` command. All member nodes must also be present in the `connected-node` list.
-
-The procedure differentiates between the current leader node and the followers. To identify the leader, use the `show ha-raft status leader` command on a running cluster.
-
-**Procedure 2. Cluster Version Upgrade**
-
-1. On the leader, first enable read-only mode using the `ha-raft read-only mode true` command and then verify that all cluster nodes are in sync with the `show ha-raft status log replications state` command.
-2. Before embarking on the upgrade procedure, it's imperative to backup each node. This ensures that you have a safety net in case of any unforeseen issues. For example, you can use the `$NCS_DIR/bin/ncs-backup` command.
-3. Delete the `$NCS_RUN_DIR/cdb/compact.lock` file and compact the CDB write log on all nodes using, for example, the `$NCS_DIR/bin/ncs --cdb-compact $NCS_RUN_DIR/cdb` command.
-4. On all nodes, delete the `$NCS_RUN_DIR/state/raft/` directory with a command such as `rm -rf $NCS_RUN_DIR/state/raft/`.
-5. Stop NSO on all the follower nodes, for example, invoking the `$NCS_DIR/bin/ncs --stop` or `systemctl stop ncs` command on each node.
-6. Stop NSO on the leader node only after you have stopped all the follower nodes in the previous step. Alternatively, NSO can be stopped on the nodes before deleting the HA Raft state and compacting the CDB write log, without needing to delete the `compact.lock` file.
-7. Upgrade the NSO packages on the leader to support the new NSO version.
-8. Install the new NSO version on all nodes.
-9. Start NSO on all nodes.
-10. Re-initialize the HA cluster using the `ha-raft create-cluster` action on the node to become the leader.
-11. Finally, verify the cluster's state through the `show ha-raft status` command. Ensure that all data has been correctly synchronized across all cluster nodes and that the leader is no longer read-only. The latter happens automatically after re-initializing the HA cluster.
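-
-A condensed sketch of the preparation steps above (assuming a System Install with the default directory layout):
-
-```bash
-# On the leader: enable read-only mode and verify followers are in sync
-admin@ncs# ha-raft read-only mode true
-admin@ncs# show ha-raft status log replications state
-# On every node: back up, compact the CDB write log, and remove the Raft state
-$ ncs-backup
-$ rm $NCS_RUN_DIR/cdb/compact.lock
-$ ncs --cdb-compact $NCS_RUN_DIR/cdb
-$ rm -rf $NCS_RUN_DIR/state/raft/
-```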
-
-For a standard System Install, the single-node upgrade procedure is described in [Single Instance Upgrade](../installation-and-deployment/upgrade-nso.md#ug.admin_guide.manual_upgrade), but in general it depends on the NSO deployment type. For example, it will be different for containerized environments. For specifics, refer to the documentation for your deployment type.
-
-For an example see the `raft-upgrade-l2` NSO system installation-based example referenced by the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) example in the NSO example set.
-
-If the upgrade fails before or during the upgrade of the original leader, start up the original followers to restore service and then restore the original leader, using backup as necessary.
-
-However, if the upgrade fails after the original leader was successfully upgraded, you should still be able to complete the cluster upgrade. If you are unable to upgrade a follower node, you may provision a (fresh) replacement and the data and packages in use will be copied from the leader.
-
-## NSO Rule-based HA
-
-NSO can manage HA groups based on a set of predefined rules. This functionality was added in NSO 5.4 and is sometimes referred to simply as built-in HA. However, since NSO 6.1, HA Raft (which is also built-in) is available as well and is likely a better choice in most situations.
-
-Rule-based HA allows administrators to:
-
-* Configure HA group members with IP addresses and default roles
-* Configure failover behavior
-* Configure start-up behavior
-* Assign roles, join the HA group, and enable/disable rule-based HA through actions
-* View the state of the current HA setup
-
-NSO rule-based HA is defined in `tailf-ncs-high-availability.yang`, with data residing under the `/high-availability/` container.
-
-{% hint style="info" %}
-In environments with high NETCONF traffic, particularly when using `ncs_device_notifs`, it's recommended to enable read-only mode on the designated primary node before performing HA activation or sync. This prevents `app_sync` from being blocked by notification processing.
-
-Use the following command prior to enabling HA or assigning roles:
-
-```bash
-admin@ncs# high-availability read-only mode true
-```
-
-After successful sync and HA establishment, disable read-only mode:
-
-```bash
-admin@ncs# high-availability read-only mode false
-```
-{% endhint %}
-
-NSO rule-based HA does not manage any virtual IP addresses, advertise any BGP routes, or similar. This must be handled by an external package. Tail-f HCC 5.x and greater provides this functionality compatible with NSO rule-based HA. You can read more about the HCC package in the [following chapter](high-availability.md#ug.ha.hcc).
-
-### Prerequisites
-
-To use NSO rule-based HA, HA must first be enabled in `ncs.conf`; see [Mode of Operation](high-availability.md#ha.moo).
-
-{% hint style="info" %}
-If the tailf-hcc package with a version less than 5.0 is loaded, NSO rule-based HA will not function. These HCC versions may still be used, but then NSO built-in HA will not function in parallel.
-{% endhint %}
-
-### HA Member Configuration
-
-All HA group members are defined under `/high-availability/ha-node`. Each configured node must have a unique IP address configured and a unique HA ID. Additionally, nominal roles and fail-over settings may be configured on a per-node basis.
-
-The HA node ID is a unique identifier used to identify NSO instances in an HA group. The HA ID of the local node, which is relevant, among other things, when an action is called, is determined by matching the configured HA node IP addresses against the IP addresses assigned to the host machine of the NSO instance. As the HA ID is crucial to NSO HA, NSO rule-based HA will not function if the local node cannot be identified.
-
-To join an HA group, a shared secret must be configured on the active primary and any prospective secondary. The secret is used for CHAP-2-like authentication and is specified under `/high-availability/token/`.
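-
-A configuration sketch for two members and the shared token (the node IDs, addresses, roles, and the exact leaf layout under `ha-node` are illustrative):
-
-```bash
-admin@ncs(config)# high-availability ha-node paris address 192.168.31.2
-admin@ncs(config)# high-availability ha-node paris nominal-role primary
-admin@ncs(config)# high-availability ha-node london address 192.168.30.2
-admin@ncs(config)# high-availability ha-node london nominal-role secondary
-admin@ncs(config)# high-availability token mysecrettoken
-admin@ncs(config)# commit
-```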
-
-{% hint style="info" %}
-In an NSO System Install setup, not only does the shared token need to match between the HA group nodes, but the configuration for encrypted strings, stored by default in `/etc/ncs/ncs.crypto_keys`, must also match between the nodes in the HA group.
-{% endhint %}
-
-The token configured on the secondary node is overwritten with the encrypted token of type `aes-256-cfb-128-encrypted-string` from the primary node when the secondary connects to the primary. If there is a mismatch between the encrypted-strings configuration on the nodes, NSO cannot decrypt the HA token to match the presented token. As a result, the primary node denies the secondary node access the next time the HA connection needs to be re-established, with a "Token mismatch, secondary is not allowed" error.
-
-See the `upgrade-l2` example, referenced from [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc), for an example setup and the [Deployment Example](../installation-and-deployment/deployment/deployment-example.md) for a description of the example.
-
-Also, note that the `ncs.crypto_keys` file is highly sensitive. The file contains the encryption keys for all CDB data that is encrypted on disk. Besides the HA token, this often includes passwords for various entities, such as login credentials to managed devices.
-
-### HA Roles
-
-NSO can assume the HA roles `primary`, `secondary`, and `none`. Roles can be assigned directly through actions, or at startup or failover. See [HA Framework Requirements](high-availability.md#ferret) for the definition of these roles.
-
-{% hint style="info" %}
-NSO rule-based HA does not support relay-secondaries.
-{% endhint %}
-
-NSO rule-based HA distinguishes between the concepts of nominal role and assigned role. The nominal role is configuration data that applies when an NSO instance starts up and at failover. The assigned role is the role that the NSO instance has been ordered to assume, either by an action or as a result of startup or failover.
-
-### Failover
-
-Failover may occur when a secondary node loses the connection to the primary node; a secondary may then take over the primary role. Failover behavior is configurable and controlled by the following parameters:
-
-* `/high-availability/ha-node{id}/failover-primary`
-* `/high-availability/settings/enable-failover`
-
-For automatic failover to function, `/high-availability/settings/enable-failover` must be set to `true`. It is then possible to enable at most one node with nominal role secondary as failover-primary, by setting the `/high-availability/ha-node{id}/failover-primary` parameter. The failover works in both directions: if a nominal primary is currently connected to the failover-primary as a secondary and loses the connection, it will attempt to take over as primary.
-
-Before failover happens, a failover-primary-enabled secondary node may attempt to reconnect to the previous primary before assuming the primary role. This behavior is configured by the following parameters, denoting how many reconnect attempts will be made and at which interval, respectively (see the configuration sketch after the list):
-
-* `/high-availability/settings/reconnect-attempts`
-* `/high-availability/settings/reconnect-interval`
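-
-A configuration sketch combining these settings (the node ID and the values are illustrative):
-
-```bash
-admin@ncs(config)# high-availability settings enable-failover true
-admin@ncs(config)# high-availability ha-node paris failover-primary true
-admin@ncs(config)# high-availability settings reconnect-attempts 3
-admin@ncs(config)# high-availability settings reconnect-interval 10
-admin@ncs(config)# commit
-```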
-
-HA members that are assigned as secondaries, but are neither failover-primaries nor set with nominal-role primary, may attempt to rejoin the HA group after losing the connection to the primary.
-
-This is controlled by `/high-availability/settings/reconnect-secondaries`. If this is `true`, secondary nodes query the nodes configured under `/high-availability/ha-node` for an NSO instance that currently has the primary role; any configured nominal roles are not considered. If no primary node is found, subsequent attempts to rejoin the HA setup are issued at the interval defined by `/high-availability/settings/reconnect-interval`.
-
-If a net-split provokes a failover, it is possible to end up with two primaries, both accepting writes. The primaries are then not synchronized and end up in a split-brain situation. Once one of the primaries joins the other as a secondary, the HA cluster is consistent again, but any out-of-sync changes are overwritten.
-
-To prevent split-brain from occurring, NSO 5.7 and later come with a rule-based consensus algorithm. The algorithm is enabled by default; it can be disabled or changed through the parameters:
-
-* `/high-availability/settings/consensus/enabled [true]`
-* `/high-availability/settings/consensus/algorithm [ncs:rule-based]`
-
-The rule-based algorithm can be used in either of the two HA constellations:
-
-* Two nodes: one nominal primary and one nominal secondary configured as failover-primary.
-* Three nodes: one nominal primary, one nominal secondary configured as failover-primary, and one perpetual secondary.
-
-On failover:
-
-* Failover-primary: become primary but enable read-only mode. Once the secondary joins, disable read-only.
-* Nominal primary: on loss of all secondaries, change role to none. If one secondary node is connected, stay primary.
-
-{% hint style="info" %}
-In certain cases, the rule-based consensus algorithm results in nodes being disconnected and will not automatically rejoin the HA cluster, such as in the example above when the nominal primary becomes none on the loss of all secondaries.
-{% endhint %}
-
-To restore the HA cluster, one may need to manually invoke the `/high-availability/be-secondary-to` action.
-
-{% hint style="info" %}
-In the case where the failover-primary takes over as primary, it enables read-only mode; if no secondary connects, it remains read-only. This is done to guarantee consistency.
-{% endhint %}
-
-{% hint style="info" %}
-In a three-node cluster, when the nominal primary takes over as actual primary, it first enables read-only mode and stays in read-only mode until a secondary connects. This is done to guarantee consistency.
-{% endhint %}
-
-The read-write mode can be manually enabled through the `/high-availability/read-only` action, with the `mode` parameter set to `false`.
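-
-For example:
-
-```bash
-admin@ncs# high-availability read-only mode false
-```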
-
-When any node loses its connection, this can also be observed in the high-availability alarms, as either a `ha-primary-down` or a `ha-secondary-down` alarm:
-
-```bash
-alarms alarm-list alarm ncs ha-primary-down /high-availability/ha-node[id='paris']
- is-cleared false
- last-status-change 2022-05-30T10:02:45.706947+00:00
- last-perceived-severity critical
- last-alarm-text "Lost connection to primary due to: Primary closed connection"
- status-change 2022-05-30T10:02:45.706947+00:00
- received-time 2022-05-30T10:02:45.706947+00:00
- perceived-severity critical
- alarm-text "Lost connection to primary due to: Primary closed connection"
-```
-
-```bash
-alarms alarm-list alarm ncs ha-secondary-down /high-availability/ha-node[id='london'] ""
- is-cleared false
- last-status-change 2022-05-30T10:04:33.231808+00:00
- last-perceived-severity critical
- last-alarm-text "Lost connection to secondary"
- status-change 2022-05-30T10:04:33.231808+00:00
- received-time 2022-05-30T10:04:33.231808+00:00
- perceived-severity critical
- alarm-text "Lost connection to secondary"
-```
-
-### Startup
-
-Startup behavior is defined by a combination of the `/high-availability/settings/start-up/assume-nominal-role` and `/high-availability/settings/start-up/join-ha` parameters, as well as the node's nominal role:
-
-| assume-nominal-role | join-ha | nominal-role | Behavior |
-|---|---|---|---|
-| true | false | primary | Assume primary role. |
-| true | false | secondary | Attempt to connect as secondary to the node (if any) that has nominal-role primary. If this fails, make no retry attempts and assume none role. |
-| true | false | none | Assume none role. |
-| false | true | primary | Attempt to join the HA setup as secondary by querying for the current primary. Retries will be attempted. The retry attempt interval is defined by /high-availability/settings/reconnect-interval. |
-| false | true | secondary | Attempt to join the HA setup as secondary by querying for the current primary. Retries will be attempted. The retry attempt interval is defined by /high-availability/settings/reconnect-interval. If all retry attempts fail, assume none role. |
-| false | true | none | Assume none role. |
-| true | true | primary | Query the HA setup once for a node with primary role. If found, attempt to connect as secondary to that node. If no current primary is found, assume primary role. |
-| true | true | secondary | Attempt to join the HA setup as secondary by querying for the current primary. Retries will be attempted. The retry attempt interval is defined by /high-availability/settings/reconnect-interval. If all retry attempts fail, assume none role. |
-| true | true | none | Assume none role. |
-| false | false | - | Assume none role. |
-
-### Actions
-
-NSO rule-based HA can be controlled through several actions. All actions are found under `/high-availability/`. The available actions are listed below:
-
-| Action | Description |
-|--------|-------------|
-| `be-primary` | Order the local node to assume the HA role primary. |
-| `be-none` | Order the local node to assume the HA role none. |
-| `be-secondary-to` | Order the local node to connect as secondary to the provided HA node. This is an asynchronous operation; the result can be found under `/high-availability/status/be-secondary-result`. |
-| `local-node-id` | Identify which of the nodes in `/high-availability/ha-node` (if any) corresponds to the local NSO instance. |
-| `enable` | Enable NSO rule-based HA and optionally assume an HA role according to the `/high-availability/settings/start-up/` parameters. |
-| `disable` | Disable NSO rule-based HA and assume the HA role none. |
-
-### Status Check
-
-The current state of NSO rule-based HA can be monitored by observing `/high-availability/status/`. Information is available about the currently active HA mode and the currently assigned role. For nodes with active mode primary, a list of connected nodes and their source IP addresses is shown. For nodes with assigned role secondary, the latest result of the be-secondary operation is listed. All NSO rule-based HA status information is non-replicated operational data, so the result will differ between nodes connected in an HA setup.
-
-## Tail-f HCC Package
-
-The Tail-f HCC package extends the built-in HA functionality by providing virtual IP addresses (VIPs) that can be used to connect to the NSO HA group primary node. HCC ensures that the VIP addresses are always bound by the HA group primary and never bound by a secondary. Each time a node transitions between primary and secondary states HCC reacts by binding (primary) or unbinding (secondary) the VIP addresses.
-
-HCC manages IP addresses at the link layer (OSI layer 2) for Ethernet interfaces and, optionally, also at the network layer (OSI layer 3) using BGP router advertisements. The layer-2 and layer-3 functions are mostly independent, and this document describes the details of each one separately. However, the layer-3 function builds on top of the layer-2 function. The layer-2 function is always necessary; otherwise, the Linux kernel on the primary node would not recognize the VIP address or accept traffic directed to it.
-
-{% hint style="info" %}
-Tail-f HCC version 5.x is non-backward compatible with previous versions of Tail-f HCC and requires functionality provided by NSO version 5.4 and greater. For more details, see the [following chapter](high-availability.md#ug.ha.hcc.compared).
-{% endhint %}
-
-### Dependencies
-
-Both the HCC layer-2 VIP and layer-3 BGP functionality depend on the `iproute2` utilities and `awk`. An optional dependency is `arping` (either from `iputils` or Thomas Habets' `arping` implementation), which allows HCC to announce the VIP-to-MAC mapping to all nodes in the network by sending gratuitous ARP requests.
-
-The HCC layer-3 BGP functionality depends on the [`GoBGP`](https://osrg.github.io/gobgp/) daemon version 2.x being installed on each NSO host that is configured to run HCC in BGP mode.
-
-GoBGP is open-source software originally developed by NTT Communications and released under the Apache License 2.0. GoBGP can be obtained directly from [https://osrg.github.io/gobgp/](https://osrg.github.io/gobgp/) and is also packaged for mainstream Linux distributions.
-
-The HCC layer-3 DNS Update functionality depends on the command line utility `nsupdate`.
-
-Tools Dependencies are listed below:
-
-| Tool | Package | Required | Description |
-|------|---------|----------|-------------|
-| `ip` | iproute2 | yes | Adds and deletes the virtual IP from the network interface. |
-| `awk` | mawk or gawk | yes | Installed with most Linux distributions. |
-| `sed` | sed | yes | Installed with most Linux distributions. |
-| `arping` | iputils or arping | optional | Installation recommended. Reduces the propagation time of changes to the virtual IP for layer-2 configurations. |
-| `gobgpd` and `gobgp` | GoBGP 2.x | optional | Required for layer-3 configurations. `gobgpd` is started by the HCC package and advertises the virtual IP using BGP; `gobgp` is used to get advertised routes. |
-| `nsupdate` | bind-tools or knot-dnsutils | optional | Required for the layer-3 DNS update functionality; used to submit Dynamic DNS Update requests to a name server. |
-
-As with the built-in HA functionality, all NSO instances must be configured to run in HA mode. See the [following instructions](high-availability.md#ha.moo) on how to enable HA on NSO instances.
-
-### Running the HCC Package with NSO as a Non-Root User
-
-GoBGP uses TCP port 179 for its communications and binds to it at startup. As port 179 is a privileged port, running `gobgpd` normally requires root privileges.
-
-When NSO runs as a non-root user, the `gobgpd` command is executed as the same user as NSO, which prevents `gobgpd` from binding to port 179.
-
-There are multiple ways of handling this; two are listed here.
-
-1. Set capability `CAP_NET_BIND_SERVICE` on the `gobgpd` file. May not be supported by all Linux distributions.
-
- ```bash
- $ sudo setcap 'cap_net_bind_service=+ep' /usr/bin/gobgpd
- ```
-2. Set the owner to `root` and the `setuid` bit of the `gobgpd` file. Works on all Linux distributions.
-
- ```bash
- $ sudo chown root /usr/bin/gobgpd
- $ sudo chmod u+s /usr/bin/gobgpd
- ```
-3. The `vipctl` script, included in the HCC package, uses `sudo` to run the `ip` and `arping` commands when NSO is not running as root. If `sudo` is used, you must ensure it does not require password input. For example, if NSO runs as `admin` user, the `sudoers` file can be edited similarly to the following:
-
- ```bash
- $ sudo echo "admin ALL = (root) NOPASSWD: /bin/ip" >> /etc/sudoers
- $ sudo echo "admin ALL = (root) NOPASSWD: /path/to/arping" >> /etc/sudoers
- ```
-
-### Tail-f HCC Compared with HCC Version 4.x and Older
-
-#### **HA Group Management Decisions**
-
-Tail-f HCC 5.x and later do not participate in decisions on which NSO node is primary or secondary. These decisions are taken by NSO's built-in HA and then pushed as notifications to HCC. The NSO built-in HA functionality is available starting with NSO 5.4; older NSO versions are not compatible with HCC 5.x and later.
-
-#### **Embedded BGP Daemon**
-
-HCC 5.x or later operates a GoBGP daemon as a subprocess completely managed by NSO. The old HCC function pack interacted with an external Quagga BGP daemon using a NED interface.
-
-#### **Automatic Interface Assignment**
-
-HCC 5.x or later automatically associates VIP addresses with Linux network interfaces using the `ip` utility from the iproute2 package. VIP addresses are also treated as `/32` without defining a new subnet. The old HCC function pack used explicit configuration to associate VIPs with existing addresses on each NSO host and define IP subnets for VIP addresses.
-
-### Upgrading
-
-Since version 5.0, HCC relies on the NSO built-in HA for cluster management and only performs address or route management in reaction to cluster changes. Therefore, no special measures are necessary if using HCC when performing an NSO version upgrade or a package upgrade. Instead, you should follow the standard best practice HA upgrade procedure from [NSO HA Version Upgrade](../installation-and-deployment/upgrade-nso.md#ch_upgrade.ha).
-
-A reference to upgrade examples can be found in the README under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc).
-
-### Layer-2
-
-The purpose of the HCC layer-2 functionality is to ensure that the configured VIP addresses are bound in the Linux kernel of the NSO primary node only. This ensures that the primary node (and only the primary node) will accept traffic directed toward the VIP addresses.
-
-HCC also notifies the local layer-2 network when VIP addresses are bound by sending Gratuitous ARP (GARP) packets. Upon receiving the Gratuitous ARP, all the nodes in the network update their ARP tables with the new mapping so they can continue to send traffic to the non-failed, now primary node.
-
-#### **Operational Details**
-
-HCC binds the VIP addresses as additional (alias) addresses on existing Linux network interfaces (e.g. `eth0`). The network interface for each VIP is chosen automatically by performing a kernel routing lookup on the VIP address. That is, the VIP will automatically be associated with the same network interface that the Linux kernel chooses to send traffic to the VIP.
-
-This means that you can map each VIP onto a particular interface by defining a route for a subnet that includes the VIP. If no such specific route exists, the VIP is automatically mapped onto the interface of the default gateway.
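-
-For example, to steer a VIP onto `eth1`, you might add a specific route covering it (the subnet and interface are hypothetical):
-
-```bash
-$ sudo ip route add 192.168.123.16/28 dev eth1
-```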
-
-{% hint style="info" %}
-To check which interface HCC will choose for a particular VIP address, run an `ip route get` lookup for that address and look at the device (`dev`) in the output, for example `eth0`:
-
-```bash
-admin@paris:~$ ip route get 192.168.123.22
-```
-{% endhint %}
-
-#### **Configuration**
-
-The layer-2 functionality is configured by providing a list of IPv4 and/or IPv6 VIP addresses and enabling HCC. The VIP configuration parameters are found under `/hcc:hcc`.
-
-Global Layer-2 Configuration:
-
-
-| Parameters | Type | Description |
-| --- | --- | --- |
-| `enabled` | boolean | If set to 'true', the primary node in an HA group automatically binds the set of Virtual IPv[46] addresses. |
-| `vip-address` | list of inet:ip-address | The list of virtual IPv[46] addresses to bind on the primary node. The addresses are automatically unbound when a node becomes secondary. The addresses can therefore be used externally to reliably connect to the HA group primary node. |
-
-#### **Example Configuration**
-
-```bash
-admin@ncs(config)# hcc enabled
-admin@ncs(config)# hcc vip 192.168.123.22
-admin@ncs(config)# hcc vip 2001:db8::10
-admin@ncs(config)# commit
-```
-
-### Layer-3 BGP
-
-The purpose of the HCC layer-3 BGP functionality is to operate a BGP daemon on each NSO node and to ensure that routes for the VIP addresses are advertised by the BGP daemon on the primary node only.
-
-The layer-3 functionality is an optional add-on to the layer-2 functionality. When enabled, the set of BGP neighbors must be configured separately for each NSO node. Each NSO node operates an embedded BGP daemon and maintains connections to peers but only the primary node announces the VIP addresses.
-
-The layer-3 functionality relies on the layer-2 functionality to assign the virtual IP addresses to one of the host's interfaces. One notable difference in assigning virtual IP addresses when operating in Layer-3 mode is that the virtual IP addresses are assigned to the loopback interface `lo` rather than to a specific physical interface.
-
-#### **Operational Details**
-
-HCC operates a [`GoBGP`](https://osrg.github.io/gobgp/) subprocess as an embedded BGP daemon. The BGP daemon is started, configured, and monitored by HCC. The HCC YANG model includes basic BGP configuration data and state data.
-
-Operational data in the YANG model includes the state of the BGP daemon subprocess and the state of each BGP neighbor connection. The BGP daemon writes log messages directly to NSO, where the HCC module extracts updated operational data and then relays the BGP daemon log messages verbatim into the HCC log. You can find these log messages in the developer log (`devel.log`).
-
-```bash
-admin@ncs# show hcc
-NODE BGPD BGPD
-ID PID STATUS ADDRESS STATE CONNECTED
--------------------------------------------------------------
-london - - 192.168.30.2 - -
-paris 827 running 192.168.31.2 ESTABLISHED true
-```
-
-{% hint style="info" %}
-GoBGP must be installed separately. The `gobgp` and `gobgpd` binaries must be found in the paths specified by the `$PATH` environment variable. For a system install, NSO reads `$PATH` from the `systemd` service file `/etc/systemd/system/ncs.service`. Since tailf-hcc 6.0.2, the path to `gobgp`/`gobgpd` can no longer be specified via the configuration data leaf `/hcc/bgp/node/gobgp-bin-dir`; the leaf has been removed from the `tailf-hcc/src/yang/tailf-hcc.yang` module.
-
-Upgrades: if BGP is enabled and the `gobgp` or `gobgpd` binaries are not found, the tailf-hcc package will fail to load. The user must then install GoBGP and invoke the `packages reload` action, or restart NSO with `NCS_RELOAD_PACKAGES=true` set in `/etc/ncs/ncs.systemd.conf` followed by `systemctl restart ncs`.
-{% endhint %}
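-
-As a rough sketch, installing the GoBGP binaries onto `$PATH` and reloading packages could look like this (the archive name is illustrative; use the GoBGP release matching your platform):
-
-```bash
-$ sudo tar -xzf gobgp_linux_amd64.tar.gz -C /usr/local/bin gobgp gobgpd
-$ ncs_cli -C -u admin
-admin@ncs# packages reload
-```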
-
-#### **Configuration**
-
-The layer-3 BGP functionality is configured as a list of BGP configurations with one list entry per node. Configurations are separate because each NSO node usually has different BGP neighbors with their own IP addresses, authentication parameters, etc.
-
-The BGP configuration parameters are found under `/hcc:hcc/bgp/node{id}`.
-
-Per-Node Layer-3 Configuration:
-
-
-| Parameters | Type | Description |
-| --- | --- | --- |
-| `node-id` | string | Unique node ID. A reference to /ncs:high-availability/ha-node/id. |
-| `enabled` | boolean | If set to true, this node uses BGP to announce VIP addresses when in the HA primary state. |
-| `as` | inet:as-number | The BGP Autonomous System Number for the local BGP daemon. |
-| `router-id` | inet:ip-address | The router ID for the local BGP daemon. |
-
-Each NSO node can connect to a different set of BGP neighbors. For each node, the BGP neighbor list configuration parameters are found under `/hcc:hcc/bgp/node{id}/neighbor{address}`.
-
-Per-Neighbor BGP Configuration:
-
-
-| Parameters | Type | Description |
-| --- | --- | --- |
-| `address` | inet:ip-address | BGP neighbor IP address. |
-| `as` | inet:as-number | BGP neighbor Autonomous System Number. |
-| `ttl-min` | uint8 | Optional minimum TTL value for BGP packets. When configured, enables BGP Generalized TTL Security Mechanism (GTSM). |
-| `password` | string | Optional password to use for BGP authentication with this neighbor. |
-| `enabled` | boolean | If set to true, an outgoing BGP connection to this neighbor is established by the HA group primary node. |
-
-#### **Example**
-
-```bash
-admin@ncs(config)# hcc bgp node paris enabled
-admin@ncs(config)# hcc bgp node paris as 64512
-admin@ncs(config)# hcc bgp node paris router-id 192.168.31.99
-admin@ncs(config)# hcc bgp node paris neighbor 192.168.31.2 as 64514
-admin@ncs(config)# ... repeated for each neighbor if more than one ...
- ... repeated for each node ...
-admin@ncs(config)# commit
-```
-
-### Layer-3 DNS Update
-
-The purpose of the HCC layer-3 DNS Update functionality is to notify a DNS server of the IP address change of the active primary NSO server, allowing the DNS server to update the DNS record for the given domain name.
-
-A geographically redundant NSO setup typically relies on DNS support. To enable this use case, tailf-hcc can dynamically update DNS using the `nsupdate` utility upon an HA status change notification.
-
-The DNS server used should support updates through the `nsupdate` command (RFC 2136).
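-
-Conceptually, the update HCC performs is equivalent to an RFC 2136 dynamic update driven through `nsupdate`, similar to the following sketch (server, zone, name, TTL, and address are illustrative):
-
-```bash
-$ nsupdate -k /path/to/key <<'EOF'
-server 10.0.0.10 53
-zone zone1.nso
-update delete example.com A
-update add example.com 120 A 10.0.0.20
-send
-EOF
-```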
-
-#### Operational Details
-
-HCC listens on the underlying NSO HA notifications stream. When HCC receives a notification that an NSO node has become primary, it updates the DNS server with the IP address of the primary NSO node for the given hostname. The HCC YANG model includes basic DNS configuration data and operational status data.
-
-Operational data in the YANG model includes the result of the latest DNS update operation.
-
-```bash
-admin@ncs# show hcc dns
-hcc dns status time 2023-10-20T23:16:33.472522+00:00
-hcc dns status exit-code 0
-```
-
-If the DNS Update is unsuccessful, an error message will be populated in operational data, for example:
-
-```bash
-admin@ncs# show hcc dns
-hcc dns status time 2023-10-20T23:36:33.372631+00:00
-hcc dns status exit-code 2
-hcc dns status error-message "; Communication with 10.0.0.10#53 failed: timed out"
-```
-
-{% hint style="info" %}
-The DNS Server must be installed and configured separately, and details are provided to HCC as configuration data. The DNS Server must be configured to update the reverse DNS record.
-{% endhint %}
-
-#### Configuration
-
-The layer-3 DNS update functionality needs DNS-related information, such as the DNS server IP address, port, and zone, as well as information about the NSO nodes involved in HA: node ID, IP address, and location.
-
-The DNS configuration parameters are found under `/hcc:hcc/dns`.
-
-Layer-3 DNS Configuration:
-
-
-| Parameters | Type | Description |
-| --- | --- | --- |
-| `enabled` | boolean | If set to true, DNS updates will be enabled. |
-| `fqdn` | inet:domain-name | DNS domain name for the HA primary. |
-| `ttl` | uint32 | Time to live for the DNS record; default 86400. |
-| `key-file` | string | File path for the nsupdate key file. |
-| `server` | inet:ip-address | DNS server IP address. |
-| `port` | uint32 | DNS server port; default 53. |
-| `zone` | inet:host | DNS zone to update on the server. |
-| `timeout` | uint32 | Timeout for the nsupdate command; default 300. |
-
-Each NSO node can be placed in a separate Location/Site/Availability-Zone. This is configured as a list member configuration, with one list entry per node ID. The member list configuration parameters are found under `/hcc:hcc/dns/member{node-id}`.
-
-
-| Parameter | Type | Description |
-| --- | --- | --- |
-| `node-id` | string | Unique NSO HA node ID. Valid values are /high-availability/ha-node when built-in HA is used, or /ha-raft/status/member for HA Raft. |
-| `ip-address` | inet:ip-address | IP address where NSO listens for incoming requests to any northbound interface. |
-| `location` | string | Name of the Location/Site/Availability-Zone where the node is placed. |
-
-#### Example
-
-Here is an example configuration for a setup of two dual-stack NSO nodes, node-1 and node-2, each with an IPv4 and an IPv6 address configured. The configuration also sets up update signing with the specified key.
-
-```bash
-admin@ncs(config)# hcc dns enabled
-admin@ncs(config)# hcc dns fqdn example.com
-admin@ncs(config)# hcc dns ttl 120
-admin@ncs(config)# hcc dns key-file /home/cisco/DNS-testing/good.key
-admin@ncs(config)# hcc dns server 10.0.0.10
-admin@ncs(config)# hcc dns port 53
-admin@ncs(config)# hcc dns zone zone1.nso
-admin@ncs(config)# hcc dns member node-1 ip-address [ 10.0.0.20 ::10 ]
-admin@ncs(config)# hcc dns member node-1 location SanJose
-admin@ncs(config)# hcc dns member node-2 ip-address [ 10.0.0.30 ::20 ]
-admin@ncs(config)# hcc dns member node-2 location NewYork
-admin@ncs(config)# commit
-```
-
-### Usage
-
-This section describes basic deployment scenarios for HCC. Layer-2 mode is demonstrated first, and then the layer-3 BGP functionality is configured in addition:
-
-* [Layer-2 Deployment](high-availability.md#layer-2-deployment)
-* [Enabling Layer-3 BGP](high-availability.md#enabling-layer-3-bgp)
-* [Enabling Layer-3 DNS](high-availability.md#enabling-layer-3-dns)
-
-A reference to container-based examples for the layer-2 and layer-3 deployment scenarios described here can be found in the NSO example set under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc).
-
-Both scenarios consist of two test nodes, `london` and `paris`, with a single IPv4 VIP address. For the layer-2 scenario, the nodes are on the same network. The layer-3 scenario also involves a BGP-enabled `router` node, as the `london` and `paris` nodes are on two different networks.
-
-#### **Layer-2 Deployment**
-
-The layer-2 operation is configured by simply defining the VIP addresses and enabling HCC. The HCC configuration on both nodes should match; otherwise, the primary node's configuration will overwrite the secondary node's configuration when the secondary connects to the primary.
-
-Addresses:
-
-
-| Hostname | Address | Role |
-| --- | --- | --- |
-| paris | 192.168.23.99 | Paris service node. |
-| london | 192.168.23.98 | London service node. |
-| vip4 | 192.168.23.122 | NSO primary node IPv4 VIP address. |
-
-Configuring VIPs:
-
-```bash
-admin@ncs(config)# hcc enabled
-admin@ncs(config)# hcc vip 192.168.23.122
-admin@ncs(config)# commit
-```
-
-Verifying VIP Availability:
-
-Once enabled, HCC on the HA group primary node will automatically assign the VIP addresses to corresponding Linux network interfaces.
-
-```bash
-root@paris:/var/log/ncs# ip address list
-1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
- link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
- inet 127.0.0.1/8 scope host lo
- valid_lft forever preferred_lft forever
- inet6 ::1/128 scope host
- valid_lft forever preferred_lft forever
-2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
- link/ether 52:54:00:fa:61:99 brd ff:ff:ff:ff:ff:ff
- inet 192.168.23.99/24 brd 192.168.23.255 scope global enp0s3
- valid_lft forever preferred_lft forever
- inet 192.168.23.122/32 scope global enp0s3
- valid_lft forever preferred_lft forever
- inet6 fe80::5054:ff:fefa:6199/64 scope link
- valid_lft forever preferred_lft forever
-```
-
-On the secondary node, HCC will not configure these addresses.
-
-```bash
-root@london:~# ip address list
-1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 ...
- link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
- inet 127.0.0.1/8 scope host lo
- valid_lft forever preferred_lft forever
- inet6 ::1/128 scope host
- valid_lft forever preferred_lft forever
-2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
- link/ether 52:54:00:fa:61:98 brd ff:ff:ff:ff:ff:ff
- inet 192.168.23.98/24 brd 192.168.23.255 scope global enp0s3
- valid_lft forever preferred_lft forever
- inet6 fe80::5054:ff:fefa:6198/64 scope link
- valid_lft forever preferred_lft forever
-```
-
-Layer-2 Example Implementation:
-
-A reference to a container-based example of the layer-2 scenario can be found in the NSO example set. See the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) `README`.
-
-#### **Enabling Layer-3 BGP**
-
-Layer-3 operation is configured for each NSO HA group node separately. The HCC configuration on both nodes should match; otherwise, the primary node's configuration will overwrite the configuration on the secondary node.
-
-Addresses:
-
-
-| Hostname | Address | AS | Role |
-| --- | --- | --- | --- |
-| paris | 192.168.31.99 | 64512 | Paris node |
-| london | 192.168.30.98 | 64513 | London node |
-| router | 192.168.30.2, 192.168.31.2 | 64514 | BGP-enabled router |
-| vip4 | 192.168.23.122 | n/a | Primary node IPv4 VIP address |
-
-Configuring BGP for Paris Node:
-
-```bash
-admin@ncs(config)# hcc bgp node paris enabled
-admin@ncs(config)# hcc bgp node paris as 64512
-admin@ncs(config)# hcc bgp node paris router-id 192.168.31.99
-admin@ncs(config)# hcc bgp node paris neighbor 192.168.31.2 as 64514
-admin@ncs(config)# commit
-```
-
-Configuring BGP for London Node:
-
-```bash
-admin@ncs(config)# hcc bgp node london enabled
-admin@ncs(config)# hcc bgp node london as 64513
-admin@ncs(config)# hcc bgp node london router-id 192.168.30.98
-admin@ncs(config)# hcc bgp node london neighbor 192.168.30.2 as 64514
-admin@ncs(config)# commit
-```
-
-Check BGP Neighbor Connectivity:
-
-Check neighbor connectivity on the `paris` primary node. Note that its connection to neighbor 192.168.31.2 (`router`) is `ESTABLISHED`.
-
-```bash
-admin@ncs# show hcc
- BGPD BGPD
-NODE ID PID STATUS ADDRESS STATE CONNECTED
-----------------------------------------------------------------
-london - - 192.168.30.2 - -
-paris 2486 running 192.168.31.2 ESTABLISHED true
-```
-
-Check neighbor connectivity on the `london` secondary node. Note that the secondary node also has an `ESTABLISHED` connection to its neighbor 192.168.30.2 (`router`). The primary and secondary nodes both maintain their BGP neighbor connections at all times when BGP is enabled, but only the primary node announces routes for the VIPs.
-
-```bash
-admin@ncs# show hcc
- BGPD BGPD
-NODE ID PID STATUS ADDRESS STATE CONNECTED
-----------------------------------------------------------------
-london 494 running 192.168.30.2 ESTABLISHED true
-paris - - 192.168.31.2 - -
-```
-
-Check BGP Routes Advertised to Neighbors:
-
-Check the BGP routes received by the `router`.
-
-```bash
-admin@ncs# show ip bgp
-...
-Network Next Hop Metric LocPrf Weight Path
-*> 192.168.23.122/32
- 192.168.31.99 0 64513 ?
-```
-
-The VIP subnet is routed to the `paris` host, which is the primary node.
-
-Layer-3 BGP Example Implementation:
-
-A reference to a container-based example of the combined layer-2 and layer-3 BGP scenario can be found in the NSO example set. See the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) `README`.
-
-#### **Enabling Layer-3 DNS**
-
-If enabled prior to HA being established, HCC updates the DNS server with the IP address of the primary node once a primary is selected.
-
-If HA is already operational and layer-3 DNS is enabled and configured afterward, HCC will not update the DNS server automatically; an automatic DNS server update only happens on an HA switchover. HCC exposes an `update` action to manually trigger an update of the DNS server with the IP address of the primary node.
-
-DNS Update Action:
-
-The user can explicitly update DNS from the specific NSO node by running the update action.
-
-```bash
-admin@ncs# hcc dns update
-```
-
-Check the result of invoking the DNS update utility using the operational data in `/hcc/dns`:
-
-```bash
-admin@ncs# show hcc dns
-hcc dns status time 2023-10-10T20:47:31.733661+00:00
-hcc dns status exit-code 0
-hcc dns status error-message ""
-```
-
-One way to verify DNS server updates is through the `nslookup` program. However, be mindful of the DNS caching mechanism, which may cache the old value for the amount of time controlled by the TTL setting.
-
-```bash
-cisco@node-2:~$ nslookup example.com
-Server: 10.0.0.10
-Address: 10.0.0.10#53
-
-Name: example.com
-Address: 10.0.0.20
-Name: example.com
-Address: ::10
-```
-
-DNS get-node-location Action:
-
-`/hcc/dns/member` holds the information about all members involved in HA. The `get-node-location` action provides information on the location of an NSO node.
-
-```bash
-admin@ncs(config)# hcc dns get-node-location
-location SanJose
-```
-
-### Data Model
-
-The HCC data model can be found in the HCC package (`tailf-hcc.yang`).
-
-## Setup with an External Load Balancer
-
-As an alternative to the HCC package, NSO built-in HA, either rule-based or HA Raft, can also be used in conjunction with a load balancer device in a reverse proxy configuration. Instead of managing the virtual IP address directly as HCC does, this setup relies on an external load balancer to route traffic to the currently active primary node.
-
-
-_Figure: Load Balancer Routes Connections to the Appropriate NSO Node_
-
-The load balancer uses HTTP health checks to determine which node is currently the active primary. The example, found in the [examples.ncs/high-availability/load-balancer](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/load-balancer) directory, uses HTTP status codes on the health-check endpoint to easily distinguish whether a node is currently primary or not.
-
-In the example, the freely available HAProxy software is used as a load balancer to demonstrate the functionality. It is configured to steer connections made to localhost on TCP port 2024 (SSH CLI) or TCP port 8080 (web UI and RESTCONF) to the active node in a 2-node HA cluster. The HAProxy software is required if you wish to run this example yourself.
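-
-For illustration, you can probe a health-check endpoint directly to see how the load balancer distinguishes the nodes; the URL below is an assumption, as the exact endpoint is defined in the example's HAProxy configuration:
-
-```bash
-# A 2xx status code would indicate the active primary; anything else, a secondary.
-$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/health
-```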
-
-
-_Figure: Load Balancer Uses Health Checks to Determine the Currently Active Primary Node_
-
-You can start all the components in the example by running the `make build start` command. At the beginning, the first node, `n1`, is the active primary. Connecting to localhost port 2024 will establish a connection to this node:
-
-```bash
-$ make build start
-Setting up run directory for nso-node1
- ... make output omitted ...
-Waiting for n2 to connect: .
-$ ssh -p 2024 admin@localhost
-admin@localhost's password: admin
-
-admin connected from 127.0.0.1 using ssh on localhost
-admin@n1> switch cli
-admin@n1# show high-availability
-high-availability enabled
-high-availability status mode primary
-high-availability status current-id n1
-high-availability status assigned-role primary
-high-availability status read-only-mode false
-ID ADDRESS
----------------
-n2 127.0.0.1
-```
-
-Then, you can disable the high availability subsystem on `n1` to simulate a node failure.
-
-```bash
-admin@n1# high-availability disable
-result NSO Built-in HA disabled
-admin@n1# exit
-Connection to localhost closed.
-```
-
-Disconnect and wait a few seconds for the built-in HA to perform the failover to node `n2`. The failover time depends on `high-availability/settings/reconnect-interval`, which is set quite aggressively in this example so that failover completes in about 6 seconds. Reconnect with the SSH client and observe that the connection is now made to the failover node, which has become the active primary:
-
-```bash
-$ ssh -p 2024 admin@localhost
-admin@localhost's password: admin
-
-admin connected from 127.0.0.1 using ssh on localhost
-admin@n2> switch cli
-admin@n2# show high-availability
-high-availability enabled
-high-availability status mode primary
-high-availability status current-id n2
-high-availability status assigned-role primary
-high-availability status read-only-mode false
-```
-
-Finally, shut down the example with the `make stop clean` command.
-
-## NB Listens to Addresses on HA Primary for Load Balancers
-
-NSO can be configured so that the HA primary listens on additional ports for the northbound interfaces NETCONF, RESTCONF, the web server (including JSON-RPC), and the CLI over SSH. Once a different node transitions to the primary role, the configured listen addresses are brought up on that node instead.
-
-When the following configuration is added to `ncs.conf`, the primary HA node will `bind(2)` and `listen(2)` on port 1830 on the wildcard IPv4 and IPv6 addresses.
-
-```xml
-<netconf-north-bound>
-  <transport>
-    <ssh>
-      <enabled>true</enabled>
-      <ip>0.0.0.0</ip>
-      <port>830</port>
-      <ha-primary-listen>
-        <ip>0.0.0.0</ip>
-        <port>1830</port>
-      </ha-primary-listen>
-      <ha-primary-listen>
-        <ip>::</ip>
-        <port>1830</port>
-      </ha-primary-listen>
-    </ssh>
-  </transport>
-</netconf-north-bound>
-```
-
-A similar configuration can be added for the other northbound interfaces; see the `ha-primary-listen` list under `/ncs-config/{restconf,webui,cli}`.
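-
-To verify that the extra port is bound only on the current primary, you can, for example, list the listening sockets on each node (assuming the configuration above):
-
-```bash
-$ ss -tln | grep 1830
-```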
-
-## HA Framework Requirements
-
-If an external HAFW is used, NSO only replicates the CDB data. NSO must be told by the HAFW which node should be primary and which nodes should be secondaries.
-
-The HA framework must also detect when nodes fail and instruct NSO accordingly. If the primary node fails, the HAFW must elect one of the remaining secondaries and appoint it the new primary. The remaining secondaries must also be informed by the HAFW about the new primary situation.
-
-### Mode of Operation
-
-NSO must be instructed through the `ncs.conf` configuration file that it should run in HA mode. The following configuration snippet enables HA mode:
-
-```xml
-<ha>
-  <enabled>true</enabled>
-  <ip>0.0.0.0</ip>
-  <port>4570</port>
-  <extra-listen>
-    <ip>::</ip>
-    <port>4569</port>
-  </extra-listen>
-  <tick-timeout>PT20S</tick-timeout>
-</ha>
-```
-
-Make sure to restart the `ncs` process for the changes to take effect.
-
-The IP address and port above indicate which IP and port should be used for communication between the HA nodes. `extra-listen` is an optional list of `ip:port` pairs that an HA primary also listens on for secondary connections. For IPv6 addresses, the syntax `[ip]:port` may be used; if the `:port` is omitted, the value of `port` is used. The `tick-timeout` is a duration indicating how often each secondary must send a tick message to the primary to indicate liveness. If the primary has not received a tick from a secondary within 3 times the configured tick timeout, the secondary is considered dead. Similarly, the primary sends tick messages to all the secondaries; if a secondary has not received any tick message from the primary within 3 times the timeout, the secondary considers the primary dead and reports accordingly.
-
-An HA node can be in one of three states: `NONE`, `SECONDARY`, or `PRIMARY`. Initially, a node is in the `NONE` state. This implies that the node will read its configuration from CDB, stored locally on file. Once the HA framework has decided whether the node should be a secondary or a primary, the HAFW must invoke either the `Ha.beSecondary(primary)` or the `Ha.bePrimary()` method.
-
-When an NSO HA node starts, it always starts up in the `NONE` state. At this point, there are no other nodes connected. Each NSO node reads its configuration data from the locally stored CDB, and applications on or off the node may connect to NSO and read the data they need. Although write operations are allowed in the `NONE` state, it is highly discouraged to initiate southbound communication unless necessary. A node in the `NONE` state should only be used to configure NSO itself or to do maintenance such as upgrades. When in the `NONE` state, some features are disabled, including but not limited to:
-
-* commit queue
-* NSO scheduler
-* nano-service side effect queue
-
-This is to avoid situations where multiple NSO nodes are trying to perform the same southbound operation simultaneously.
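-
-To see what HA-related state NSO itself reports, you can, for example, filter the status report (a sketch; the exact section names and format of the `ncs --status` output vary between NSO versions):
-
-```bash
-$ ncs --status | grep -i ha
-```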
-
-At some point, the HAFW will command some nodes to become secondary nodes of a named primary node. When this happens, each secondary node tracks changes and (logically or physically) copies all the data from the primary. Previous data at the secondary node is overwritten.
-
-Note that the HAFW, by using NSO's start phases, can make sure that NSO does not start its northbound interfaces (NETCONF, CLI, ...) until the HAFW has decided what type of node it is. Furthermore, once a node has been set to the `SECONDARY` state, it is not possible to initiate new write transactions towards the node. It is thus never possible for an agent to write directly into a secondary node. Once a node is returned either to the `NONE` state or to the `PRIMARY` state, write transactions can once again be initiated towards the node.
-
-The HAFW may command a secondary node to become primary at any time. The secondary node already has up-to-date data, so it simply stops receiving updates from the previous primary. Presumably, the HAFW also commands the previous primary node to become a secondary, takes it down, or otherwise handles the situation. If the primary has crashed, the HAFW tells a secondary to become primary, restarts the necessary services on the previous primary node, and gives it an appropriate role, such as secondary. This is outside the scope of NSO.
-
-Each of the primary and secondary nodes has the same set of all callpoints and validation points locally on each node. The start sequence has to make sure the corresponding daemons are started before the HAFW starts directing secondary nodes to the primary, and before replication starts. The associated callbacks will however only be executed at the primary. If e.g. the validation executing at the primary needs to read data that is not stored in the configuration and only available on another node, the validation code must perform any needed RPC calls.
-
-If the order from the HAFW is to become primary, the node will start to listen for incoming secondaries at the `ip:port` configured under `/ncs-config/ha`. The secondaries connect to the primary over TCP, and this socket is used by NSO to distribute the replicated data.
-
-If the order is to be a secondary, the node will contact the primary and possibly copy the entire configuration from the primary. This copy is not performed if the primary and secondary decide that they have the same version of the CDB database loaded, in which case nothing needs to be copied. This mechanism is implemented by use of a unique token, the `transaction id` - it contains the node id of the node that generated it and a time stamp, but is effectively "opaque".
-
-This transaction ID is generated by the cluster primary each time a configuration change is committed, and all nodes write the same transaction ID into their copy of the committed configuration. If the primary dies and one of the remaining secondaries is appointed the new primary, the other secondaries must be told to connect to the new primary. They will compare their last transaction ID to the one from the newly appointed primary. If they are the same, no CDB copy occurs. This will be the case unless a configuration change has sneaked in, since both the new primary and the remaining secondaries will still have the last transaction ID generated by the old primary; the new primary will not generate a new transaction ID until a new configuration change is committed. The same mechanism works if a secondary node is simply restarted. No cluster reconfiguration will lead to a CDB copy unless the configuration has been changed in between.
-
-Northbound agents should run on the primary; an agent can't commit write operations at a secondary node.
-
-When an agent commits its CDB data, CDB will stream the committed data out to all registered secondaries. If a secondary dies during the commit, nothing will happen; the commit will succeed anyway. When and if the secondary reconnects to the cluster, it will have to copy the entire configuration. All data on the HA sockets between NSO nodes goes only in the direction from the primary to the secondaries. A secondary that isn't reading its data will eventually lead to full TCP buffers at the primary. In principle, it is the responsibility of the HAFW to discover this situation and notify the primary NSO about the hanging secondary. However, if 3 times the tick timeout is exceeded, NSO will itself consider the node dead and notify the HAFW. The default value for the tick timeout is 20 seconds.
-
-The primary node holds the active copy of the entire configuration data in CDB. All configuration data has to be stored in CDB for replication to work. At a secondary node, any request to read will be serviced, while write requests will be refused. Thus, the CDB subscription code works the same regardless of whether the CDB client is running at the primary or at any of the secondaries. Once a secondary has received the updates associated with a commit at the primary, all CDB subscribers at the secondary will be duly notified about any changes using the normal CDB subscription mechanism.
-
-If the system has been set up to subscribe for NETCONF notifications, the secondaries will have all subscriptions as configured in the system, but the subscription will be idle. All NETCONF notifications are handled by the primary, and once the notifications get written into stable storage (CDB) at the primary, the list of received notifications will be replicated to all secondaries.
-
-## Security Aspects
-
-We specify in `ncs.conf` which IP address the primary should bind for incoming secondaries. If we choose the default value `0.0.0.0` it is the responsibility of the application to ensure that connection requests only arrive from acceptable trusted sources through some means of firewalling.
-
-A cluster is also protected by a token, a secret string only known to the application. The `Ha.connect()` method must be given the token. A secondary node that connects to a primary node negotiates with the primary using a CHAP-2-like protocol, thus both the primary and the secondary are ensured that the other end has the same token without ever revealing their own token. The token is never sent in clear text over the network. This mechanism ensures that a connection from an NSO secondary to a primary can only succeed if they both have the same token.
-
-It is indeed possible to store the token itself in CDB. Thus, an application can initially read the token from the local CDB data and then use that token in the constructor for the `Ha` class. In this case, it may very well be a good idea to have the token stored in CDB be of type `tailf:aes-256-cfb-128-encrypted-string`.
-
-If the actual CDB data that is sent on the wire between cluster nodes is sensitive and the network is untrusted, the recommendation is to use IPsec between the nodes. An alternative option is to decide exactly which configuration data is sensitive and then use the `tailf:aes-256-cfb-128-encrypted-string` type for that data. For data of this type, the encrypted values will be sent on the wire in update messages from the primary to the secondaries.
-
-## API
-
-There are two APIs used by the HA framework to control the replication aspects of NSO. First, there is a synchronous API used to tell NSO what to do; second, the application may create a notifications socket and subscribe to HA-related events, where NSO notifies the application on certain HA-related events, such as the loss of the primary. The HA-related notifications sent by NSO are crucial to programming the HA framework correctly.
-
-The HA-related classes reside in the `com.tailf.ha` package; see the Javadocs for reference. The HA notifications-related classes reside in the `com.tailf.notif` package; see the Javadocs for reference.
-
-## Ticks
-
-The configuration parameter `/ncs-config/ha/tick-timeout` is by default set to 20 seconds. This means that every 20 seconds each secondary will send a tick message on the socket leading to the primary. Similarly, the primary will send a tick message every 20 seconds on every secondary socket.
-
-This aliveness detection mechanism is necessary for NSO. If a socket gets closed, all is well; NSO will clean up and notify the application accordingly using the notifications API. However, if a remote node freezes, the socket will not get properly closed at the other end. NSO distributes update data from the primary to the secondaries, and if a remote node is not reading the data, the TCP buffers will fill up and NSO will have to start buffering the data. NSO will buffer data for at most 3 times the tick timeout. If a tick has not been received from a remote node within that time, the node is considered dead. NSO will report accordingly over the notifications socket and either remove the hanging secondary or, if it is a secondary that loses contact with the primary, go into the initial `NONE` state.
-
-If the HAFW can be fully trusted, it is possible to set this timeout to `PT0S`, i.e., zero, in which case the entire dead-node-detection mechanism in NSO is disabled.
-
-## Relay Secondaries
-
-The normal setup of an NSO HA cluster is to have all secondaries connected directly to the primary. This is a configuration that is both conceptually simple and reasonably straightforward to manage for the HAFW. In some scenarios, in particular a cluster with multiple secondaries at a location that is network-wise distant from the primary, it can however be sub-optimal, since the replicated data will be sent to each remote secondary individually over a potentially low-bandwidth network connection.
-
-To make this case more efficient, we can instruct a secondary to be a relay for other secondaries, by invoking the `Ha.beRelay()` method. This will make the secondary start listening on the IP address and port configured for HA in `ncs.conf`, and handle connections from other secondaries in the same manner as the cluster primary does. The initial CDB copy (if needed) to a new secondary will be done from the relay secondary, and when the relay secondary receives CDB data for replication from its primary, it will distribute the data to all its connected secondaries in addition to updating its own CDB copy.
-
-To instruct a node to become a secondary connected to a relay secondary, we use the `Ha.beSecondary()` method as usual, but pass the node information for the relay secondary instead of the node information for the primary. That is, the "sub-secondary" will in effect consider the relay secondary as its primary. To instruct a relay secondary to stop being a relay, we can invoke the `Ha.beSecondary()` method with the same parameters as in the original call. This is a no-op for a "normal" secondary, but it will cause a relay secondary to stop listening for secondary connections and disconnect any already connected "sub-secondaries".
-
-This setup requires special consideration by the HAFW. Instead of just telling each secondary to connect to the primary independently, it must set up the secondaries that are intended to be relays, and tell them to become relays, before telling the "sub-secondaries" to connect to the relay secondaries. Consider the case of a primary M and a secondary S0 in one location, and two secondaries S1 and S2 in a remote location, where we want S1 to act as relay for S2. The setup of the cluster then needs to follow this procedure:
-
-1. Tell M to be primary.
-2. Tell S0 and S1 to be secondary with M as primary.
-3. Tell S1 to be relay.
-4. Tell S2 to be secondary with S1 as primary.
-
-Conversely, the handling of network outages and node failures must also take the relay secondary setup into account. For example, if a relay secondary loses contact with its primary, it will transition to the `NONE` state just like any other secondary, and it will then disconnect its sub-secondaries, which will cause those to transition to `NONE` too, since they lost contact with "their" primary. Or, if a relay secondary dies in a way that is detected by its sub-secondaries, they will also transition to `NONE`. Thus, in the example above, S1 and S2 need to be handled differently. E.g., if S2 dies, the HAFW probably won't take any action, but if S1 dies, it makes sense to instruct S2 to be a secondary of M instead (and when S1 comes back, perhaps tell S2 to be a relay and S1 to be a secondary of S2).
-
-Besides the use of `Ha.beRelay()`, the API is mostly unchanged when using relay secondaries. The HA event notifications reporting the arrival or the death of a secondary are still generated only by the "real" cluster primary. If the `Ha.HaStatus()` method is used towards a relay secondary, it will report the node state as `SECONDARY_RELAY` rather than just `SECONDARY`, and the array of nodes will have its primary as the first element (same as for a "normal" secondary), followed by its "sub-secondaries" (if any).
-
-## CDB Replication
-
-When HA is enabled in `ncs.conf`, CDB automatically replicates data written on the primary to the connected secondary nodes. Replication is done on a per-transaction basis to all the secondaries in parallel and is synchronous. When NSO is in secondary mode, the northbound APIs are in read-only mode; that is, the configuration cannot be changed on a secondary other than through replication updates from the primary. It is still possible to read from, for example, NETCONF or the CLI (if they are enabled) on a secondary. CDB subscriptions work as usual. When NSO is in the `NONE` state, CDB is unlocked and behaves as when NSO is not in HA mode at all.
-
-Unlike configuration data, operational data is replicated only if it is defined as persistent in the data model (using the `tailf:persistent` extension).
diff --git a/administration/management/ned-administration.md b/administration/management/ned-administration.md
deleted file mode 100644
index 34ca6f4b..00000000
--- a/administration/management/ned-administration.md
+++ /dev/null
@@ -1,963 +0,0 @@
----
-description: Learn about Cisco-provided NEDs and how to manage them.
----
-
-# NED Administration
-
-This section provides necessary information on Network Element Driver (NED) administration, with a focus on Cisco-provided NEDs. If you're planning to use NEDs not provided by Cisco, refer to [NED Development](../../development/advanced-development/developing-neds/) to build your own NED packages.
-
-## NED Introduction
-
-NED represents a key NSO component that makes it possible for the NSO core system to communicate southbound with network devices in most deployments. NSO has a built-in client that can be used to communicate southbound with NETCONF-enabled devices. Many network devices are, however, not NETCONF-enabled, and there exists a wide variety of methods and protocols for configuring network devices, ranging from simple CLI to HTTP/REST-enabled devices. For such cases, it is necessary to use a NED to allow NSO to communicate southbound with the network device.
-
-Even for NETCONF-enabled devices, it is possible that NSO's built-in NETCONF client cannot be used, for instance, if the devices do not strictly follow the specification for the NETCONF protocol. In such cases, one must also use a NED to seamlessly communicate with the device. See [Managing Cisco-provided third Party YANG NEDs](ned-administration.md#sec.managing_thirdparty_neds) for more information on third-party YANG NEDs.
-
-### NED Contents and Capabilities
-
-It's important to understand the functionality of a NED and the capabilities it offers — as well as those it does not. The following summarizes what a NED contains and what it doesn't.
-
-#### **What a NED Provides**
-
-
-
-YANG Data Model
-
-The NED provides a YANG data model of the device to NSO and services, enabling standardized configuration management. This applies only to NEDs where Cisco creates and maintains the device data model—commonly referred to as classic NEDs, which includes both the CLI-based and Generic NEDs—and excludes third-party YANG (3PY) NEDs, where the model is provided externally.\
-\
-Note that for classic NEDs, the device model is typically implemented as a superset, covering multiple versions or variants of a given device type. This approach allows a single NED package to support a broad range of software versions or hardware flavors. The benefit is simplified deployment and upgrade handling across similar devices. However, a side effect is that certain parts of the model may not apply to the specific device instance in use.
-
-
-
-
-
-Data Translation
-
-The NED is responsible for transforming outbound data from NSO's internal format into a format understood by the device — whether that format is vendor-specific (e.g., CLI, REST, SOAP) or standards-based (e.g., NETCONF, RESTCONF, gNMI). It also handles the reverse transformation for inbound data from the device back into NSO's format.
-
-
-
-NSO ensures all data modifications occur within a single transaction for consistency and guarantees a transaction is either completely successful or fails, maintaining data integrity.
-
-#### **What a NED Does not Provide**
-
-
-
-A Data Model of the Entire Set in the Data
-
-For Classic NEDs, NED development is use-case driven. As a result, a NED, in most cases, does not contain the complete data model of a device. Providing a 100% complete YANG model for a device is not a goal and is not in the scope of NED development. It does not make sense to invest resources into modeling data that is not needed to support the desired use cases. If a NED does not cover a needed use case, please submit an enhancement request via your support channel. For third-party NEDs, the models come from third-party sources not controlled by Cisco.
-
-
-
-
-
-An Exact Copy of the Syntax in the Device CLI
-
-NED development focuses on representing device data for NSO. As a side effect for CLI NEDs, the NSO CLI will get similar behavior as the device CLI, however, in most situations, this will not be perfect and is not the goal of the NED.
-
-
-
-
-
-Fine-grained Validation of Data (Classic NEDs Only)
-
-In classic NEDs, adding strict validations in the YANG model (e.g., `mandatory`, `when`, `must`, `range`, `min`, `max`, etc.) can lead to inflexible models. These constraints are interpreted and enforced by NSO at runtime, not the device. Since such validations often need to be updated as devices evolve across versions, NSO's policy is to keep the models relaxed by minimizing the use of these validation constructs. This allows for greater flexibility and forward compatibility.
-
-
-
-
-
-Convenience Macros in the Device CLI (Only Discrete Data Leaves are Supported)
-
-Some devices have macro-style functionality in the CLI, and users may find it annoying that these are not available in NEDs. Such convenience macros have proven very dynamic in the parameters they change, causing frequent out-of-sync situations, and are therefore generally not available in NEDs.
-
-
-
-
-
-Dynamic Configuration in Devices (Only Data in a Transaction May Change)
-
-Cisco NEDs do not model device-generated or dynamic configuration, as such behavior varies between device versions and is difficult to standardize. Only configuration explicitly included in a transaction is managed by NSO. If needed, service logic can insert expected dynamic elements during provisioning.
-
-
-
-
-
-Auto-correction of Parameters with Multiple Syntaxes (i.e., Use Canonical Form)
-
-The NED does not allow the same value for a parameter to have a different name (e.g., `true` vs. `yes`). The canonical name displayed in `show-running-config` or similar is used.
-
-
-
-
-
-Handling Out-of-band Changes (Model as Operational Data)
-
-Leaves that have out-of-band changes will cause NSO and the device to become out-of-sync and should be made `config false`, or not be part of the model at all. Similarly, actions that cause out-of-band changes are not supported.
-
-
-
-
-
-Splitting a Single Transaction into Several Sub-transactions
-
-For devices that support the transaction paradigm, the NED will never split an NSO transaction in two or more device transactions. The service must handle this by doing multiple NSO transactions.
-
-
-
-
-
-Backporting of Fixes to Old NED Releases (i.e., Trunk based Development is Used)
-
-All NEDs use trunk-based development, i.e., new NED releases are created from the tip of a single branch, `develop`. New features and fixes are thus delivered to the stakeholders in the latest NED release, not by backporting to an old release.
-
-
-
-## Types of NED Packages
-
-A NED package is a package that NSO uses to manage a particular type of device. A NED is a piece of code that enables communication with a particular type of managed device. You add NEDs to NSO as a special kind of package, called NED packages.
-
-A NED package must provide a device YANG model as well as define means (protocol) to communicate with the device. The latter can either leverage the NSO built-in NETCONF and SNMP support or use a custom implementation. When a package provides custom protocol implementation, typically written in Java, it is called a CLI NED or a Generic NED.
-
-Cisco provides and supports a number of such NEDs. Among these Cisco-provided NEDs, a major category is CLI NEDs, which communicate with a device through its CLI instead of a dedicated API.
-
-
-_Figure: NED Package Types_
-
-### NED Types Summary Table
-
-
NED Category
Purpose
Provider
YANG Model Provider
YANG Models Included?
Device Interface
Protocols Supported
Key Characteristics
CLI NED*
Designed for devices with a CLI-based interface. The NED parses CLI commands and translates data to/from YANG.
Cisco
Cisco NSO NED Team
Yes
CLI (Command Line Interface)
SSH, Telnet
Mimics CLI command hierarchy
Turbo parser for CLI parsing
Transform engines for data conversion
Targets devices using CLI as config interface
Generic NED - Cisco YANG Models*
Built for API-based devices (e.g., REST, SOAP, TL1), using custom parsers and data transformation logic maintained by Cisco.
Cisco
Cisco NSO NED Team
Yes
Non-CLI (API-based)
REST, TL1, CORBA, SOAP, RESTCONF, gNMI, NETCONF
Model-driven devices
YANG models mimic proprietary protocol messages
JSON/XML transformers
Custom protocol implementations
Third-party YANG NED
Cisco-supplied generic NED packages that do not include any device models.
-
-\*Also referred to as Classic NED.
-
-### CLI NED
-
-This NED category is targeted at devices that use CLI as a configuration interface. Cisco-provided CLI NEDs are available for various network devices from different vendors. Many different CLI syntaxes are supported.
-
-The driver element in a CLI NED implemented by the Cisco NSO NED team typically consists of the following three parts:
-
-* The protocol client, responsible for connecting to and interacting with the device. The protocols supported are SSH and Telnet.
-* A fast and versatile CLI parser (+ emitter), usually referred to as the turbo parser.
-* Various transform engines capable of converting data between NSO and device formats.
-
-The YANG models in a CLI NED are developed and maintained by the Cisco NSO NED team. Usually, the models for a CLI NED are structured to mimic the CLI command hierarchy on the device.
-
-
-_Figure: CLI NED_
-
-### Generic NED
-
-A Generic NED is typically used to communicate with non-CLI devices, such as devices using protocols like REST, TL1, Corba, SOAP, RESTCONF, or gNMI as a configuration interface. Even NETCONF-enabled devices in many cases require a generic NED to function properly with NSO.
-
-The driver element in a Generic NED implemented by the Cisco NED team typically consists of the following parts:
-
-* The protocol client, responsible for interacting with the device.
-* Various transform engines capable of converting data between NSO and the device formats, usually JSON and/or XML transformers.
-
-There are two types of Generic NEDs maintained by the Cisco NSO NED team:
-
-* NEDs with Cisco-owned YANG models. These NEDs have models developed and maintained by the Cisco NSO NED team.
-* NEDs targeted at YANG models from third-party vendors, also known as third-party YANG NEDs.
-
-### **Generic Cisco-provided NEDs with Cisco-owned YANG Models**
-
-Generic NEDs belonging to the first category typically handle devices that are not model-driven. For instance, devices using proprietary protocols based on REST, SOAP, Corba, etc. The YANG models for such NEDs are usually structured to mimic the messages used by the proprietary protocol of the device.
-
-
-_Figure: Generic NED_
-
-### **Third-party YANG NEDs**
-
-As the name implies, this NED category is used for cases where the device YANG models are not implemented, maintained, or owned by the Cisco NSO NED team. Instead, the YANG models are typically provided by the device vendor itself, or by organizations like IETF, IEEE, ONF, or OpenConfig.
-
-This category of NEDs has some special characteristics that set them apart from all other NEDs developed by the Cisco NSO NED team:
-
-* Targeted for devices supporting model-driven protocols like NETCONF, RESTCONF, and gNMI.
-* Delivered from the software.cisco.com portal without any device YANG models included. There are several reasons for this, such as legal restrictions that prevent Cisco from re-distributing YANG models from other vendors, or the availability of several different version bundles for open-source YANG, like OpenConfig. The version used by the NED must match the version used by the targeted device.
-* The NEDs can be bundled with various fixes to solve shortcomings in the YANG models, the download sources, and/or in the device. These fixes are referred to as recipes.
-
-
-_Figure: Third-Party YANG NEDs_
-
-Since the third-party NEDs are delivered without any device YANG models, there are additional steps required to make this category of NEDs operational:
-
-1. The device models need to be downloaded and copied into the NED package source tree. This can be done by using a special (optional) downloader tool bundled with each third-party YANG NED, or in any custom way.
-2. The NED must be rebuilt with the downloaded YANG models.
-
-This procedure is thoroughly described in [Managing Cisco-provided third-Party YANG NEDs](ned-administration.md#sec.managing_thirdparty_neds).
-
-#### **Recipes**
-
-A third-party YANG NED can be bundled with up to three types of recipe modules. These recipes are used by the NED to solve various types of issues related to:
-
-* The source of the YANG files.
-* The YANG files.
-* The device itself.
-
-The recipes represent the characteristics and the real value of a third-party YANG NED. Recipes are typically adapted for a certain bundle of YANG models and/or certain device types. This is why there exist many different third-party YANG NEDs, each one adapted for a specific protocol, a specific model package, and/or a specific device.
-
-{% hint style="info" %}
-The NSO NED team does not provide any super third-party YANG NEDs, for instance, a super RESTCONF NED that can be used with any models and any device.
-{% endhint %}
-
-**Third-party YANG NED Recipe Types**
-
-
-
-**Download Recipes (or Download Profiles)**
-
-When downloading the YANG files, it is first of all important to know which source to use. In some cases, the source is the device itself, for instance, if the device is enabled for NETCONF or, in rare cases, RESTCONF.
-
-In other cases, the device does not support model download. This applies to all gNMI-enabled devices and most RESTCONF devices too. In this case, the source can be a public Git repository or an archive file provided by the device vendor.
-
-Another important question is what YANG models and what versions to download. To make this task easier, third-party NEDs can be bundled with the download recipes (also known as download profiles). These are presets to be used with the downloader tool bundled with the NED. There can be several profiles, each representing a preset that has been verified to work by the Cisco NSO NED team. A profile can point out a certain source to download from. It can also limit the scope of the download so that only certain YANG files are selected.
-
-**YANG Recipes (YR)**
-
-Third-party YANG files can often contain various types of errors, ranging from real bugs that cause compilation errors to certain YANG constructs that are known to cause runtime issues in NSO. To ensure that the files can be built correctly, the third-party NEDs can be bundled with YANG recipes. These recipes patch the downloaded YANG files before they are built by the NSO compiler. This procedure is performed automatically by the `make` system when the NED is rebuilt after downloading the device YANG files. For more information, refer to the procedure related to rebuilding the NED with a unique NED ID in NED READMEs.
-
-In some cases, YANG recipes are also necessary when a device does not fully conform to the behavior described by its advertised YANG models. This often happens when the device is more permissive than the model suggests—for example, allowing optional parameters that the model marks as mandatory, or omitting data that is expected. Such mismatches can lead to runtime issues in NSO, such as `sync-from` failures or commit errors. YANG recipes allow patching the models to reflect the actual device behavior more accurately.
-
-**Runtime Recipes (RR)**
-
-Many devices enabled for NETCONF, RESTCONF, or gNMI sometimes deviate in their runtime behavior. This can make it impossible to interact properly with NSO. These deviations can be on any level in the runtime behavior, such as:
-
-* The configuration protocol is not properly implemented, i.e., the device lacks support for mandatory parts of, for instance, the RESTCONF RFC.
-* The device returns "dirty" configuration dumps, for instance, JSON or XML containing invalid elements.
-* Special quirks are required when applying new configuration on a device; additional transforms of the payload may also be required before it is relayed by the NED.
-* The device has aliasing issues, possibly caused by overlapping YANG models. If leaf X in model A is modified, the device will automatically modify leaf Y in model B as well. While this can be a cause of deviation, note that resolving aliasing issues through runtime recipes is generally avoided by NSO, as it is typically considered a modeling error.
-
-A third-party YANG NED can be bundled with runtime recipes to solve these kinds of issues, if necessary. How this is implemented varies from NED to NED. In some cases, a NED has a fixed set of recipes that are always used. Alternatively, a NED can support several different recipes, which can be configured through a NED setting, referred to as a runtime profile. For example, a multi-vendor third-party YANG NED might have one runtime profile for each device type supported:
-
-```bash
-admin@ncs(config)# devices device dev-1 ned-settings onf-tapi_rc restconf profile vendor-xyz
-```
-
-### NED Settings
-
-NED settings are YANG models augmented as configurations in NSO and control the behavior of the NED. These settings are augmented under:
-
-* `/devices/global-settings/ned-settings`
-* `/devices/profiles/ned-settings`
-* `/devices/device/ned-settings`
-
-Most NEDs are instrumented with a large number of NED settings that can be used to customize the device instance configured in NSO. The README file in the respective NED contains more information on these.
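-
-For example, to inspect which NED settings are currently set on a particular device instance (`dev-1` is a placeholder name):
-
-```bash
-admin@ncs(config)# show full-configuration devices device dev-1 ned-settings
-```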
-
-## Purpose of NED ID
-
-Each managed device in NSO has a device type that informs NSO how to communicate with the device. When managing NEDs, the device type is either `cli` or `generic`. The other two device types, `netconf` and `snmp`, are used in NETCONF and SNMP packages and are further described in this guide.
-
-In addition, a special NED ID identifier is needed. Simply put, this identifier is a handle in NSO pointing to the NED package. NSO uses the identifier when it is about to invoke the driver in a NED package. The identifier ensures that the driver of the correct NED package is called for a given device instance. For more information on how to set up a new device instance, see [Configuring a device with the new Cisco-provided NED](ned-administration.md#sec.config_device.with.ciscoid).
-
-Each NED package has a NED ID, which is mandatory. The NED ID is a simple string that can have any format. For NEDs developed by the Cisco NSO NED team, the NED ID is formatted as `<name>-<type>-<major>.<minor>`.
-
-**Examples**
-
-* `onf-tapi_rc-gen-2.0`
-* `cisco-iosxr-cli-7.43`
-
-The NED ID for a certain NED package stays the same from one version to another, as long as no backward incompatible changes have been introduced to the YANG models. Upgrading a NED from one version to another, where the NED ID is the same, is simple as it only requires replacing the old NED package with the new one in NSO and then reloading all packages. For third-party (3PY) NEDs, such as the `onf-tapi_rc` NED, the situation differs slightly. Since the YANG models originate from external sources, the NED team does not control their evolution or guarantee backward compatibility between revisions. As a result, it is the responsibility of the end user to determine whether changes in the third-party YANG models are backward compatible and to choose an appropriate version and NED ID when rebuilding the NED. Unlike classic NEDs, upgrading a 3PY NED may therefore require more careful validation and potentially a change in NED ID to reflect incompatibilities.
-
-Upgrading a NED package from one version to another, where the NED ID is not the same (typically indicated by a change of major or minor number in the NED version), requires additional steps. The new NED package first needs to be installed side-by-side with the old one. Then, a NED migration needs to be performed. This procedure is thoroughly described in [NED Migration](ned-administration.md#sec.ned_migration).
-
-The Cisco NSO NED team ensures that our CLI NEDs, as well as generic NEDs with Cisco-owned models, have version numbers and NED IDs that indicate any possible backward-incompatible YANG model changes. When a NED with such an incompatible change is released, the minor digit in the version is always incremented. The case is a bit different for our third-party YANG NEDs, since it is up to the end user to select the NED ID to be used. This is further described in [Managing Cisco-provided Third-Party YANG NEDs](ned-administration.md#sec.managing_thirdparty_neds).
-
-### NED Versioning Scheme (Classic NEDs Only)
-
-{% hint style="warning" %}
-Not applicable to Cisco third-party NEDs.
-{% endhint %}
-
-A NED is assigned a version number consisting of a sequence of numbers separated by dots. The first two numbers represent the major and minor version, and the third number represents the maintenance version.
-
-For example, the number 5.8.1 indicates a maintenance release (1) for the minor release 5.8. Incompatible YANG model changes require either the major or minor version number to be changed. This means that any version within the 5.8.x series is backward compatible with the previous versions.
-
-When a newer maintenance release with the same major/minor version replaces a NED release, NSO can perform a simple data model upgrade to handle stored instance data in the CDB (Configuration Database). This type of upgrade does not pose a risk of data loss.
-
-However, when a NED is replaced by a new major/minor release, it becomes a NED migration. These migrations are complex because the YANG model changes can potentially result in the loss of instance data if not handled correctly.
-
-
-_Figure: NED Version Scheme_
-
-## Installing a NED in NSO
-
-This section describes the NED installation in NSO for Local and System installs.
-
-{% tabs %}
-{% tab title="NED Installation on Local Install" %}
-{% hint style="info" %}
-The procedure below broadly outlines the steps needed to install a NED package on a [Local Install](../installation-and-deployment/local-install.md). For the most up-to-date and specific installation instructions, consult the `README.md` supplied with the NED.
-{% endhint %}
-
-General instructions to install a NED package:
-
-1. Download the latest production-grade version of the NED from [software.cisco.com](https://software.cisco.com) using the URLs provided on your NED license certificates. All NED packages are files with the `.signed.bin` extension, named using the following rule: `ncs-<NSO version>-<NED name>-<NED version>.signed.bin`.
-2. Place the NED package in the `/tmp/ned-package-store` directory and configure the environment variable `NSO_RUNDIR` to point to the NSO runtime directory.
-3. Unpack the NED package and verify its signature. The result of the unpacking is a `tar.gz` file with the same name as the `.bin` file.
-4. Untar the `.tar.gz` file. The result is a subdirectory named like `<NED name>-<NED version>`.
-5. Install the NED on NSO, using the `ncs-setup` tool.
-6. Finally, open an NSO CLI session and load the new NED package.
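-
-A sketch of these steps, assuming a hypothetical `router-nc` NED for NSO 6.2 (the exact file names, versions, and package directory will differ):
-
-```bash
-$ cp ncs-6.2-router-nc-1.0.1.signed.bin /tmp/ned-package-store
-$ cd /tmp/ned-package-store
-$ sh ncs-6.2-router-nc-1.0.1.signed.bin   # unpacks the tar.gz and verifies the signature
-$ tar xfz ncs-6.2-router-nc-1.0.1.tar.gz
-$ ncs-setup --package /tmp/ned-package-store/router-nc-1.0.1 --dest $NSO_RUNDIR
-$ ncs_cli -u admin -C
-admin@ncs# packages reload
-```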
-{% endtab %}
-
-{% tab title="NED Installation on System Install" %}
-{% hint style="info" %}
-The procedure below broadly outlines the steps needed to install a NED package on a [System Install](../installation-and-deployment/system-install.md). For the most up-to-date and specific installation instructions, consult the `README.md` supplied with the NED.
-{% endhint %}
-
-General instructions to install a NED package:
-
-1. Download the latest production-grade version of the NED from [software.cisco.com](https://software.cisco.com) using the URLs provided on your NED license certificates. All NED packages are files with the `.signed.bin` extension, named using the following rule: `ncs-<NSO version>-<NED name>-<NED version>.signed.bin`.
-2. Place the NED package in the `/tmp/ned-package-store` directory.
-3. Unpack the NED package and verify its signature. The result of the unpacking is a `.tar.gz` file with the same name as the `.bin` file.
-4. Perform an NSO backup before installing the new NED package.
-5. Start an NSO CLI session.
-6. Fetch the NED package.
-7. Install the NED package (add the argument `replace-existing` if a previous version has been loaded).
-8. Finally, load the NED package.
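-
-A sketch of these steps with the same hypothetical `router-nc` NED (names and versions are illustrative; consult the NED `README.md` for the exact commands):
-
-```bash
-$ cp ncs-6.2-router-nc-1.0.1.signed.bin /tmp/ned-package-store
-$ cd /tmp/ned-package-store
-$ sh ncs-6.2-router-nc-1.0.1.signed.bin   # unpacks the tar.gz and verifies the signature
-# ncs-backup
-$ ncs_cli -u admin -C
-admin@ncs# software packages fetch package-from-file \
-/tmp/ned-package-store/ncs-6.2-router-nc-1.0.1.tar.gz
-admin@ncs# software packages install package router-nc-1.0.1
-admin@ncs# packages add
-```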
-{% endtab %}
-{% endtabs %}
-
-## Configuring a Device with an Installed NED
-
-Once a NED has been installed in NSO, the next step is to create and configure device entries that use this NED. This section describes the basic steps for configuring a device instance with a newly installed NED package. Only the most basic configuration steps are covered here; many NEDs also require additional custom configuration to be operational. This applies in particular to generic NEDs. Information about such additional configuration can be found in the `README.md` and `README-ned-settings.md` files bundled with the NED package.
-
-The following info is necessary to proceed with the basic setup of a device instance in NSO:
-
-* NED ID of the new NED.
-* Connection information for the device to connect to (address and port).
-* Authentication information to the device (username and password).
-
-The general steps to configure a device with a NED are:
-
-1. Start an NSO CLI session.
-2. Enter the configuration mode.
-3. Configure a new authentication group to be used for this device.
-4. Configure the new device instance, such as its IP address, port, etc.
-5. Check the `README.md` and `README-ned-settings.md` bundled with the NED package for further information on additional settings to make the NED fully operational.
-6. Commit the configuration.
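-
-A sketch of these steps for a hypothetical CLI NED with ned-id `router-cli-1.0` (adjust the device name, address, port, and credentials to your environment):
-
-```cli
-admin@ncs# config
-admin@ncs(config)# devices authgroups group mygroup default-map remote-name admin \
-remote-password secret
-admin@ncs(config)# devices device dev-1 device-type cli ned-id router-cli-1.0
-admin@ncs(config-device-dev-1)# address 10.0.0.1 port 22
-admin@ncs(config-device-dev-1)# authgroup mygroup
-admin@ncs(config-device-dev-1)# state admin-state unlocked
-admin@ncs(config-device-dev-1)# commit
-```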
-
-## Managing Cisco-provided Third Party YANG NEDs
-
-The third-party YANG NED type is a special category of the generic NED type targeted for devices supporting protocols like NETCONF, RESTCONF, and gNMI. As the name implies, this NED category is used for cases where the device YANG models are not implemented or maintained by the Cisco NSO NED Team. Instead, the YANG models are typically provided by the device vendor itself or by organizations like IETF, IEEE, ONF, or OpenConfig.
-
-A third-party YANG NED package is delivered from the software.cisco.com portal without any device YANG models included. It is required that the models are first downloaded, followed by a rebuild and reload of the package, before the NED can become fully operational. This task needs to be performed by the NED user.
-
-Detailed NED-specific instructions to manage Cisco-provided third-party YANG NEDs are provided in the respective READMEs.
-
-## NED Migration
-
-If you upgrade a managed device (such as installing a new firmware), the device data model can change in a significant way. If this is the case, you usually need to use a different and newer NED with an updated YANG model.
-
-When the changes in the NED are not backward compatible, the NED is assigned a new ned-id to avoid breaking existing code. On the plus side, this allows you to use both versions of the NED at the same time, so some devices can use the new version while others use the old one. As a result, there is no need to upgrade all devices at the same time. The downside is that NSO doesn't know the two NEDs are related and, because of the different ned-ids, will not perform any upgrade on its own. Instead, you must manually change the NED of a managed device through a NED migration.
-
-{% hint style="info" %}
-For third-party NEDs, the end user is required to configure the NED ID and also be aware of the backward incompatibilities.
-{% endhint %}
-
-Migration is required when upgrading a NED and the NED-ID changes, which is signified by a change in either the first or the second number in the NED package version. For example, if you're upgrading the existing `router-nc-1.0.1` NED to `router-nc-1.2.0` or `router-nc-2.0.2`, you must perform NED migration. On the other hand, upgrading to `router-nc-1.0.2` or `router-nc-1.0.3` retains the same ned-id and you can upgrade the `router-1.0.1` package in place, directly replacing it with the new one. However, note that some third-party, non-Cisco packages may not adhere to this standard versioning convention. In that case, you must check the ned-id values to see whether migration is needed.
-
-
-_Figure: Sample NED Package Versioning_
-
-A potential issue with a new NED is that it can break an existing service or other packages that rely on it. To help service developers and operators verify or upgrade the service code, NSO provides migration tooling with additional options for identifying the paths and service instances that may be impacted. Therefore, ensure that all the other packages are compatible with the new NED before you start migrating devices.
-
-To prepare for the NED migration process, first load the new NED package into NSO with either the `packages reload` or the `packages add` command. Then, use the `show packages` command to verify that both NEDs, the new and the old, are present. Finally, you may perform the migration of devices either one by one or multiple at a time.
-
-Depending on your operational policies, this may be done during normal operations and does not strictly require a maintenance window, as the migration only reads from and doesn't write to a network device. Still, it is recommended that you create an NSO backup before proceeding.
-
-Note that changing a ned-id also affects device templates if you use them. To make existing device templates compatible with the new ned-id, you can use the `copy` action. It will copy the configuration used for one ned-id to another, as long as the schema nodes used haven't changed between the versions. The following example demonstrates the `copy` action usage:
-
-```bash
-admin@ncs(config)# devices template acme-ntp ned-id router-nc-1.0 copy ned-id router-nc-1.2
-```
-
-For individual devices, use the `/devices/device/migrate` action, with the `new-ned-id` parameter. Without additional options, the command will read and update the device configuration in NSO. As part of this process, NSO migrates all the configuration and service meta-data. Use the `dry-run` option to see what the command would do and `verbose` to list all impacted service instances.
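-
-For example, a dry run of migrating a single device, using the ned-ids from the example above (the device name is illustrative):
-
-```cli
-admin@ncs# devices device dev-1 migrate new-ned-id router-nc-1.2 dry-run verbose
-```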
-
-You may also use the `no-networking` option to prevent NSO from generating any southbound traffic towards the device. In this case, only the device configuration in the CDB is used for the migration but then NSO can't know if the device is in sync. Afterward, you must use the **compare-config** or the **sync-from** action to remedy this.
-
-For migrating multiple devices, use the `/devices/migrate` action, which takes the same options. However, with this action, you must also specify the `old-ned-id`, which limits the migration to devices using the old NED. You can further restrict the action with the `device` parameter, selecting only specific devices.
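-
-For example, migrating all devices that use the old NED, restricted to two specific devices (names are illustrative):
-
-```cli
-admin@ncs# devices migrate old-ned-id router-nc-1.0 new-ned-id router-nc-1.2 \
-device [ dev-1 dev-2 ]
-```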
-
-It is possible for a NED migration to fail if the new NED is not entirely backward compatible with the old one and the device has an active configuration that is incompatible with the new NED version. In such cases, NSO will produce an error with the YANG constraint that is not satisfied. Here, you must first manually adjust the device configuration to make it compatible with the new NED, and then you can perform the migration as usual.
-
-Depending on what changes are introduced by the migration and how these impact the services, it might be good to `re-deploy` the affected services before removing the old NED package. It is especially recommended in the following cases:
-
-* When the service touches a list key that has changed. As long as the old schema is loaded, NSO is able to perform an upgrade.
-* When a namespace that was used by the service has been removed. The service diffset, that is, the recorded configuration changes created by the service, will no longer be valid. The diffset is needed for the correct `get-modifications` output, `deep-check-sync`, and similar operations.
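-
-As a sketch, re-deploying all instances of a hypothetical `myservice` service, with a dry run first to inspect the resulting changes:
-
-```cli
-admin@ncs# myservice * re-deploy dry-run
-admin@ncs# myservice * re-deploy
-```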
-
-## Migrating from Legacy to Third-party NED
-
-{% hint style="info" %}
-This section uses `juniper-junos_nc` as an example third-party NED. The process is generally the same for, and applicable to, other third-party NEDs.
-{% endhint %}
-
-NSO has supported Junos devices from early on. The legacy Junos NED is NETCONF-based, but as Junos devices did not provide YANG modules in the past, complex NSO machinery translated Juniper's XML Schema Description (XSD) files into a single YANG module. This was an attempt to aggregate several Juniper device modules/versions.
-
-Juniper nowadays provides YANG modules for Junos devices. Junos YANG modules can be downloaded from the device and used directly in NSO with the new `juniper-junos_nc` NED.
-
-By downloading the YANG modules using `juniper-junos_nc` NED tools and rebuilding the NED, the NED can provide full coverage immediately when the device is updated instead of waiting for a new legacy NED release.
-
-This guide describes how to replace the legacy `juniper-junos` NED and migrate NSO applications to the `juniper-junos_nc` NED using the NSO MPLS VPN example from the NSO examples collection as a reference.
-
-Prepare the example:
-
-1. Add the `juniper-junos` and `juniper-junos_nc` NED packages to the example.
-2. Configure the connection to the Junos device.
-3. Add the MPLS VPN service configuration to the simulated network, including the Junos device using the legacy `juniper-junos` NED.
-
-Adapting the service to the `juniper-junos_nc` NED:
-
-1. Un-deploy MPLS VPN service instances with `no-networking`.
-2. Delete Junos device config with `no-networking`.
-3. Set the Junos device to NETCONF/YANG compliant mode.
-4. Download the compliant YANG models, build, and reload the `juniper-junos_nc` NED package.
-5. Switch the ned-id for the Junos device to the `juniper-junos_nc` NED package.
-6. Sync from the Junos device to get the compliant Junos device config.
-7. Update the MPLS VPN service to handle the difference between the non-compliant and compliant configurations belonging to the service.
-8. Re-deploy the MPLS VPN service instances with `no-networking` to make the MPLS VPN service instances own the device configuration again.
-
-{% hint style="info" %}
-If applying the steps for this example on a production system, you should first take a backup using the `ncs-backup` tool before proceeding.
-{% endhint %}
-
-### Prepare the Example
-
-This guide uses the MPLS VPN example in Python from the NSO example set under [examples.ncs/service-management/mpls-vpn-python](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-python) to demonstrate porting an existing application to use the `juniper-junos_nc` NED. The simulated Junos device is replaced with a Junos vMX 21.1R1.11 container, but other NETCONF/YANG-compliant Junos versions also work.
-
-### **Add the `juniper-junos` and `juniper-junos_nc` NED Packages**
-
-The first step is to add the latest `juniper-junos` and `juniper-junos_nc` NED packages to the example's package directory. The NED tarballs must be downloaded from your [https://software.cisco.com/download/home](https://software.cisco.com/download/home) account to the `mpls-vpn-python` example directory. Replace the `NSO_VERSION` and `NED_VERSION` variables with the versions you use:
-
-```bash
-$ cd $NCS_DIR/examples.ncs/service-management/mpls-vpn-python
-$ cp ./ncs-NSO_VERSION-juniper-junos-NED_VERSION.tar.gz packages/
-$ cd packages
-$ tar xfz ../ncs-NSO_VERSION-juniper-junos_nc-NED_VERSION.tar.gz
-$ cd -
-```
-
-Build and start the example:
-
-```bash
-$ make all start
-```
-
-### **Configure the Connection to the Junos Device**
-
-Replace the netsim device connection configuration in NSO with the configuration for connecting to the Junos device. Adjust the `USER_NAME`, `PASSWORD`, and `HOST_NAME/IP_ADDR` variables and the timeouts as required for the Junos device you are using with this example:
-
-```bash
-$ ncs_cli -u admin -C
-admin@ncs# config
-admin@ncs(config)# devices authgroups group juniper umap admin remote-name USER_NAME \
- remote-password PASSWORD
-admin@ncs(config)# devices device pe2 authgroup juniper address HOST_NAME/IP_ADDR port 830
-admin@ncs(config)# devices device pe2 connect-timeout 240
-admin@ncs(config)# devices device pe2 read-timeout 240
-admin@ncs(config)# devices device pe2 write-timeout 240
-admin@ncs(config)# commit
-admin@ncs(config)# end
-admin@ncs# exit
-```
-
-Open a CLI terminal or use NETCONF on the Junos device to verify that the `rfc-compliant` and `yang-compliant` modes are not yet enabled. Examples:
-
-```bash
-$ ssh USER_NAME@HOST_NAME/IP_ADDR
-junos> configure
-junos# show system services netconf
-ssh;
-```
-
-Or:
-
-```bash
-$ netconf-console -s plain -u USER_NAME -p PASSWORD --host=HOST_NAME/IP_ADDR \
-  --port=830 --get-config \
-  --subtree-filter=-<<<'<configuration xmlns="http://xml.juniper.net/xnm/1.1/xnm">
-                          <system>
-                            <services>
-                              <netconf/>
-                            </services>
-                          </system>
-                        </configuration>'
-
-<?xml version="1.0" encoding="UTF-8"?>
-<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
-  <data>
-    <configuration xmlns="http://xml.juniper.net/xnm/1.1/xnm">
-      <system>
-        <services>
-          <netconf>
-            <ssh/>
-          </netconf>
-        </services>
-      </system>
-    </configuration>
-  </data>
-</rpc-reply>
-```
-
-The `rfc-compliant` and `yang-compliant` nodes must not be enabled yet for the legacy Junos NED to work. If they are enabled, delete them in the Junos CLI or using NETCONF. A netconf-console example:
-
-```bash
-$ netconf-console -s plain -u USER_NAME -p PASSWORD --host=HOST_NAME/IP_ADDR --port=830 \
-  --db=candidate \
-  --edit-config=- <<<'<configuration xmlns="http://xml.juniper.net/xnm/1.1/xnm">
-                        <system>
-                          <services>
-                            <netconf>
-                              <rfc-compliant operation="remove"/>
-                              <yang-compliant operation="remove"/>
-                            </netconf>
-                          </services>
-                        </system>
-                      </configuration>'
-
-$ netconf-console -s plain -u USER_NAME -p PASSWORD --host=HOST_NAME/IP_ADDR \
- --port=830 --commit
-```
-
-Back to the NSO CLI to upgrade the legacy `juniper-junos` NED to the latest version:
-
-```bash
-$ ncs_cli -u admin -C
-admin@ncs# config
-admin@ncs(config)# devices device pe2 ssh fetch-host-keys
-admin@ncs(config)# devices device pe2 migrate new-ned-id juniper-junos-nc-NED_VERSION
-admin@ncs(config)# devices sync-from
-admin@ncs(config)# end
-```
-
-### **Add the MPLS VPN Service Configuration to the Simulated Network**
-
-Turn off `autowizard` and `complete-on-space` to make it possible to paste configs:
-
-```cli
-admin@ncs# autowizard false
-admin@ncs# complete-on-space false
-```
-
-Below is the example service config for two MPLS VPNs, where the endpoints have been selected to pass through the `PE` node `PE2`, which is a Junos device:
-
-```
-vpn l3vpn ikea
-as-number 65101
-endpoint branch-office1
- ce-device ce1
- ce-interface GigabitEthernet0/11
- ip-network 10.7.7.0/24
- bandwidth 6000000
-!
-endpoint branch-office2
- ce-device ce4
- ce-interface GigabitEthernet0/18
- ip-network 10.8.8.0/24
- bandwidth 300000
-!
-endpoint main-office
- ce-device ce0
- ce-interface GigabitEthernet0/11
- ip-network 10.10.1.0/24
- bandwidth 12000000
-!
-qos qos-policy GOLD
-!
-vpn l3vpn spotify
-as-number 65202
-endpoint branch-office1
- ce-device ce5
- ce-interface GigabitEthernet0/1
- ip-network 10.2.3.0/24
- bandwidth 10000000
-!
-endpoint branch-office2
- ce-device ce3
- ce-interface GigabitEthernet0/4
- ip-network 10.4.5.0/24
- bandwidth 20000000
-!
-endpoint main-office
- ce-device ce2
- ce-interface GigabitEthernet0/8
- ip-network 10.0.1.0/24
- bandwidth 40000000
-!
-qos qos-policy GOLD
-!
-```
-
-To verify that the traffic passes through `PE2`:
-
-```cli
-admin@ncs(config)# commit dry-run outformat native
-```
-
-Toward the end of this lengthy output, observe that some config changes are going to the `PE2` device using the `http://xml.juniper.net/xnm/1.1/xnm` legacy namespace:
-
-```
-device {
- name pe2
-    data <rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
-              message-id="1">
-           <edit-config>
-             <target>
-               <candidate/>
-             </target>
-             <test-option>test-then-set</test-option>
-             <error-option>rollback-on-error</error-option>
-             <config>
-               <configuration xmlns="http://xml.juniper.net/xnm/1.1/xnm">
-                 <interfaces>
-                   <interface>
-                     <name>xe-0/0/2</name>
-                     <unit>
-                       <name>102</name>
-                       <description>Link to CE / ce5 - GigabitEthernet0/1</description>
-                       <family>
-                         <inet>
-                           <address>
-                             <name>192.168.1.22/30</name>
-                           </address>
-                         </inet>
-                       </family>
-                       <vlan-id>102</vlan-id>
-                     </unit>
-                   </interface>
-                 </interfaces>
-                 ...
-```
-
-Looks good. Commit to the network:
-
-```cli
-admin@ncs(config)# commit
-```
-
-### Adapting the Service to the `juniper-junos_nc` NED
-
-Now that the service's configuration is in place using the legacy `juniper-junos` NED to configure the `PE2` Junos device, proceed and switch to using the `juniper-junos_nc` NED with `PE2` instead. The service template and Python code will need a few adaptations.
-
-### **Un-deploy MPLS VPN Services Instances with `no-networking`**
-
-To keep the NSO service meta-data information intact when bringing up the service with the new `juniper-junos_nc` NED, first `un-deploy` the service instances in NSO, only keeping the configuration on the devices:
-
-```cli
-admin@ncs(config)# vpn l3vpn * un-deploy no-networking
-```
-
-### **Delete Junos Device Config with `no-networking`**
-
-First, save the legacy Junos non-compliant mode device configuration to later diff against the compliant mode config:
-
-```cli
-admin@ncs(config)# show full-configuration devices device pe2 config \
- configuration | display xml | save legacy.xml
-```
-
-Delete the `PE2` configuration in NSO to prepare for retrieving it from the device in a NETCONF/YANG compliant format using the new NED:
-
-```cli
-admin@ncs(config)# no devices device pe2 config
-admin@ncs(config)# commit no-networking
-admin@ncs(config)# end
-admin@ncs# exit
-```
-
-### **Set the Junos Device to NETCONF/YANG Compliant Mode**
-
-Using the Junos CLI:
-
-```bash
-$ ssh USER_NAME@HOST_NAME/IP_ADDR
-junos> configure
-junos# set system services netconf rfc-compliant
-junos# set system services netconf yang-compliant
-junos# show system services netconf
-ssh;
-rfc-compliant;
-yang-compliant;
-junos# commit
-```
-
-Or, using the NSO `netconf-console` tool:
-
-```bash
-$ netconf-console -s plain -u USER_NAME -p PASSWORD --host=HOST_NAME/IP_ADDR --port=830 \
-  --db=candidate \
-  --edit-config=- <<<'<configuration xmlns="http://xml.juniper.net/xnm/1.1/xnm">
-                        <system>
-                          <services>
-                            <netconf>
-                              <rfc-compliant/>
-                              <yang-compliant/>
-                            </netconf>
-                          </services>
-                        </system>
-                      </configuration>'
-
-$ netconf-console -s plain -u USER_NAME -p PASSWORD --host=HOST_NAME/IP_ADDR --port=830 \
- --commit
-```
-
-### **Switch the NED ID for the Junos Device to the `juniper-junos_nc` NED Package**
-
-```bash
-$ ncs_cli -u admin -C
-admin@ncs# config
-admin@ncs(config)# devices device pe2 device-type generic ned-id juniper-junos_nc-gen-1.0
-admin@ncs(config)# commit
-admin@ncs(config)# end
-```
-
-### **Download the Compliant YANG Models, Build, and Load the `juniper-junos_nc` NED Package**
-
-The `juniper-junos_nc` NED is delivered without YANG modules, so that it can be populated with the modules of a specific device. The YANG modules are retrieved directly from the Junos device:
-
-```bash
-$ ncs_cli -u admin -C
-admin@ncs# devices device pe2 connect
-admin@ncs# devices device pe2 rpc rpc-get-modules get-modules
-admin@ncs# exit
-```
-
-See the `juniper-junos_nc` `README` for more options and details.
-
-Build the YANG modules retrieved from the Junos device with the `juniper-junos_nc` NED:
-
-```bash
-$ make -C packages/juniper-junos_nc-gen-1.0/src
-```
-
-Reload the packages to load the `juniper-junos_nc` NED with the added YANG modules:
-
-```bash
-$ ncs_cli -u admin -C
-admin@ncs# packages reload
-```
-
-### **Sync From the Junos Device to Get the Device Configuration in NETCONF/YANG Compliant Format**
-
-```cli
-admin@ncs# devices device pe2 sync-from
-```
-
-### **Update the MPLS VPN Service**
-
-The service must be updated to handle the difference between the Junos device's non-compliant and compliant configuration. The NSO service uses Python code to configure the Junos device using a service template. One way to find the required updates to the template and code is to check the difference between the non-compliant and compliant configurations for the parts covered by the template.
-
-
-_Figure: Side by Side, Running Config on the Left, Template on the Right_
-
-Checking the `packages/l3vpn/templates/l3vpn-pe.xml` service template Junos device part under the legacy `http://xml.juniper.net/xnm/1.1/xnm` namespace, you can observe that it configures `interfaces`, `routing-instances`, `policy-options`, and `class-of-service`.
-
-You can save the NETCONF/YANG compliant Junos device configuration and diff it against the non-compliant configuration from the previously stored `legacy.xml` file:
-
-```cli
-admin@ncs# show running-config devices device pe2 config configuration \
- | display xml | save new.xml
-```
-
-Examining the difference between the configuration in the `legacy.xml` and `new.xml` files for the parts covered by the service template:
-
-1. There is no longer a single namespace covering all configurations. The configuration is now divided into multiple YANG modules with a namespace for each.
-2. The `/configuration/policy-options/policy-statement/then/community` node choice identity is no longer provided with a leaf named `key1`. Instead, the leaf name is `choice-ident`, and a `choice-value` leaf is set.
-3. The `/configuration/class-of-service/interfaces/interface/unit/shaping-rate/rate` leaf format has changed from using an `int32` value to a string with either no suffix or a "k", "m" or "g" suffix. This differs from the other devices controlled by the template, so a new template `BW_SUFFIX` variable set from the Python code is needed.
-
-To enable the template to handle a Junos device in NETCONF/YANG compliant mode, add configuration along the following lines to the `packages/l3vpn/templates/l3vpn-pe.xml` service template (the element names follow the Junos YANG modules; adapt them to the modules retrieved from your device):
-
-```xml
- ...
-+<configuration xmlns="http://yang.juniper.net/junos/conf/root" tags="merge">
-+  <interfaces xmlns="http://yang.juniper.net/junos/conf/interfaces">
-+    <interface>
-+      <name>{$PE_INT_NAME}</name>
-+      <vlan-tagging/>
-+      <per-unit-scheduler/>
-+      <unit>
-+        <name>{$VLAN_ID}</name>
-+        <description>Link to CE / {$CE} - {$CE_INT_NAME}</description>
-+        <vlan-id>{$VLAN_ID}</vlan-id>
-+        <family>
-+          <inet>
-+            <address>
-+              <name>{$LINK_PE_ADR}/{$LINK_PREFIX}</name>
-+            </address>
-+          </inet>
-+        </family>
-+      </unit>
-+    </interface>
-+  </interfaces>
-+  <routing-instances xmlns="http://yang.juniper.net/junos/conf/routing-instances">
-+    <instance>
-+      <name>{/name}</name>
-+      <instance-type>vrf</instance-type>
-+      <interface>
-+        <name>{$PE_INT_NAME}.{$VLAN_ID}</name>
-+      </interface>
-+      <route-distinguisher>
-+        <rd-type>{/as-number}:1</rd-type>
-+      </route-distinguisher>
-+      <vrf-import>{/name}-IMP</vrf-import>
-+      <vrf-export>{/name}-EXP</vrf-export>
-+      <protocols>
-+        <bgp>
-+          <group>
-+            <name>{/name}</name>
-+            <local-address>{$LINK_PE_ADR}</local-address>
-+            <peer-as>{/as-number}</peer-as>
-+            <local-as>
-+              <as-number>100</as-number>
-+            </local-as>
-+            <neighbor>
-+              <name>{$LINK_CE_ADR}</name>
-+            </neighbor>
-+          </group>
-+        </bgp>
-+      </protocols>
-+    </instance>
-+  </routing-instances>
-+  <policy-options xmlns="http://yang.juniper.net/junos/conf/policy-options">
-+    <policy-statement>
-+      <name>{/name}-EXP</name>
-+      <from>
-+        <protocol>bgp</protocol>
-+      </from>
-+      <then>
-+        <community>
-+          <choice-ident>add</choice-ident>
-+          <choice-value/>
-+          <community-name>{/name}-comm-exp</community-name>
-+        </community>
-+        <accept/>
-+      </then>
-+    </policy-statement>
-+    <policy-statement>
-+      <name>{/name}-IMP</name>
-+      <from>
-+        <protocol>bgp</protocol>
-+        <community>{/name}-comm-imp</community>
-+      </from>
-+      <then>
-+        <accept/>
-+      </then>
-+    </policy-statement>
-+    <community>
-+      <name>{/name}-comm-imp</name>
-+      <members>target:{/as-number}:1</members>
-+    </community>
-+    <community>
-+      <name>{/name}-comm-exp</name>
-+      <members>target:{/as-number}:1</members>
-+    </community>
-+  </policy-options>
-+  <class-of-service xmlns="http://yang.juniper.net/junos/conf/class-of-service">
-+    <interfaces>
-+      <interface>
-+        <name>{$PE_INT_NAME}</name>
-+        <unit>
-+          <name>{$VLAN_ID}</name>
-+          <shaping-rate>
-+            <rate>{$BW_SUFFIX}</rate>
-+          </shaping-rate>
-+        </unit>
-+      </interface>
-+    </interfaces>
-+  </class-of-service>
-+</configuration>
- ...
-```
-
-The Python code changes to handle the new `BW_SUFFIX` variable, generating a string with a suffix instead of an `int32` value:
-
-```
-# of the service. These functions can be useful e.g. for
-# allocations that should be stored and existing also when the
-# service instance is removed.
-+
-+ @staticmethod
-+ def int32_to_numeric_suffix_str(val):
-+ for suffix in ["", "k", "m", "g", ""]:
-+ suffix_val = int(val / 1000)
-+ if suffix_val * 1000 != val:
-+ return str(val) + suffix
-+ val = suffix_val
-+
-@ncs.application.Service.create
-def cb_create(self, tctx, root, service, proplist):
- # The create() callback is invoked inside NCS FASTMAP and must
-```
-
-Code that uses the function and sets the string in the service template:
-
-```
- tv.add('LOCAL_CE_NET', getIpAddress(endpoint.ip_network))
- tv.add('CE_MASK', getNetMask(endpoint.ip_network))
-+ tv.add('BW_SUFFIX', self.int32_to_numeric_suffix_str(endpoint.bandwidth))
- tv.add('BW', endpoint.bandwidth)
- tmpl = ncs.template.Template(service)
- tmpl.apply('l3vpn-pe', tv)
-```
-
-After making the changes to the service template and Python code, reload the updated package(s):
-
-```bash
-$ ncs_cli -u admin -C
-admin@ncs# packages reload
-```
-
-### **Re-deploy the MPLS VPN Service Instances**
-
-The service instances need to be re-deployed to own the device configuration again:
-
-```cli
-admin@ncs# vpn l3vpn * re-deploy no-networking
-```
-
-The service is now in sync with the device configuration stored in NSO CDB:
-
-```cli
-admin@ncs# vpn l3vpn * check-sync
-vpn l3vpn ikea check-sync
-in-sync true
-vpn l3vpn spotify check-sync
-in-sync true
-```
-
-When re-deploying the service instances, any issues with the added service template section for the compliant Junos device configuration, such as the added namespaces and nodes, are discovered.
-
-Since the Junos device model does not validate the format of the rate leaf string with a suffix, a value in the wrong format is not discovered until the configuration is pushed to the Junos device. Comparing the device configuration in NSO with the configuration on the device reveals such inconsistencies without having to test the configuration with the device:
-
-```cli
-admin@ncs# devices device pe2 compare-config
-```
-
-If there are issues, correct them and redo the `re-deploy no-networking` for the service instances.
-
-When all issues have been resolved, the service configuration is in sync with the device configuration, and the NSO CDB device configuration matches the configuration on the Junos device:
-
-```bash
-$ ncs_cli -u admin -C
-admin@ncs# vpn l3vpn * re-deploy
-```
-
-The NSO service instances are now in sync with the configuration on the Junos device using the `juniper-junos_nc` NED.
-
-## Revision Merge Functionality
-
-The YANG modeling language supports the notion of a module `revision`. It allows users to distinguish between different versions of a module, so the module can evolve over time. If you wish to use a new revision of a module for a managed device, for example, to access new features, you generally need to create a new NED.
-
-When a model evolves quickly and you have many devices that require the use of a lot of different revisions, you will need to maintain a high number of NEDs, which are mostly the same. This can become especially burdensome during NSO version upgrades, when all NEDs may need to be recompiled.
-
-When a YANG module is only updated in a backward-compatible way (following the upgrade rules in RFC6020 or RFC7950), the NSO compiler, `ncsc`, allows you to pack multiple module revisions into the same package. This way, a single NED with multiple device model revisions can be used, instead of multiple NEDs. Based on the capabilities exchange, NSO will then use the correct revision for communication with each device.
-
-However, there is a major downside to this approach. While the exact revision is known for each communication session with the managed device, the device model in NSO does not have that information. For that reason, the device model always uses the latest revision. When pushing configuration to a device that only supports an older revision, NSO silently drops the unsupported parts. This may have surprising results, as the NSO copy can contain configuration that is not really supported on the device. Use the `no-revision-drop` commit parameter when you want to make sure you are not committing config that is not supported by a device.
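-
-For example:
-
-```cli
-admin@ncs(config)# commit no-revision-drop
-```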
-
-If you still wish to use this functionality, you can create a NED package with the `ncs-make-package --netconf-ned` command as you would otherwise. However, the supplied source YANG directory should contain YANG modules with different revisions. The files should follow the _`module-or-submodule-name`_`@`_`revision-date`_`.yang` naming convention, as specified in RFC6020. Some versions of the compiler require you to use the `--no-fail-on-warnings` option with the `ncs-make-package` command, or the build process may fail.
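-
-A sketch of building such a package, assuming a `router` module with two backward-compatible revisions placed in `src/yang`:
-
-```bash
-$ ls src/yang
-router@2020-02-27.yang  router@2020-09-18.yang
-$ ncs-make-package --netconf-ned src/yang router --no-fail-on-warnings
-```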
-
-The [examples.ncs/device-management/ned-yang-revision](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/ned-yang-revision) example shows how you can perform a YANG model upgrade. The original, 1.0 version of the router NED uses the `router@2020-02-27.yang` YANG model. First, it is updated to the version 1.0.1 `router@2020-09-18.yang` using a revision merge approach. This is possible because the changes are backward-compatible.
-
-In the second part of the example, the updates in `router@2022-01-25.yang` introduce breaking changes, therefore the version is increased to 1.1 and a different NED-ID is assigned to the NED. In this case, you can't use revision merge and the usual NED migration procedure is required.
diff --git a/administration/management/package-mgmt.md b/administration/management/package-mgmt.md
deleted file mode 100644
index cf4fbbff..00000000
--- a/administration/management/package-mgmt.md
+++ /dev/null
@@ -1,265 +0,0 @@
----
-description: Perform package management tasks.
----
-
-# Package Management
-
-All user code that needs to run in NSO must be part of a package. A package is basically a directory of files with a fixed file structure or a tar archive with the same directory layout. A package consists of code, YANG modules, etc., that are needed to add an application or function to NSO. Packages are a controlled way to manage loading and versions of custom applications.
-
-Network Element Drivers (NEDs) are also packages. Each NED allows NSO to manage a network device of a specific type. A NED typically contains a device YANG model and the code specifying how NSO should connect to the device; the exception is third-party YANG NED packages, which by default do not contain a device YANG model (the models must be downloaded and adapted before being added to the package). For NETCONF devices, NSO includes built-in tools to help you build a NED, as described in [NED Administration](ned-administration.md), that you can use if needed. Otherwise, a third-party YANG NED, if available, should be used instead. Vendors, in some cases, provide the required YANG device models but not the entire NED. In practice, all NSO instances use at least one NED. The set of used NED packages depends on the number of different device types the NSO manages.
-
-When NSO starts, it searches for packages to load. The `ncs.conf` parameter `/ncs-config/load-path` defines a list of directories. At initial startup, NSO searches these directories for packages and copies the packages to a private directory tree in the directory defined by the `/ncs-config/state-dir` parameter in `ncs.conf`, and loads and starts all the packages found. On subsequent startups, NSO will by default only load and start the copied packages. The purpose of this procedure is to make it possible to reliably load new or updated packages while NSO is running, with a fallback to the previously existing version of the packages if the reload should fail.
-
-In a System Install of NSO, packages are always installed (normally through symbolic links) in the `packages` subdirectory of the run directory, i.e. by default `/var/opt/ncs/packages`, and the private directory tree is created in the `state` subdirectory, i.e. by default `/var/opt/ncs/state`.
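-
-A minimal sketch of how these directories are configured in `ncs.conf`, assuming the default System Install paths:
-
-```xml
-<load-path>
-  <dir>/var/opt/ncs/packages</dir>
-</load-path>
-<state-dir>/var/opt/ncs/state</state-dir>
-```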
-
-## Loading Packages
-
-Loading of new or updated packages (as well as removal of packages that should no longer be used) can be requested via the `reload` action - from the NSO CLI:
-
-```bash
-admin@ncs# packages reload
-reload-result {
- package cisco-ios
- result true
-}
-```
-
-This request makes NSO copy all packages found in the load path to a temporary version of its private directory, and load the packages from this directory. If the loading is successful, this temporary directory will be made permanent, otherwise, the temporary directory is removed and NSO continues to use the previous version of the packages. Thus when updating packages, always update the version in the load path, and request that NSO does the reload via this action.
-
-If the package changes include modified, added, or deleted `.fxs` files or `.ccl` files, NSO needs to run a data model upgrade procedure, also called a CDB upgrade. NSO provides a `dry-run` option to the `packages reload` action to test the upgrade without committing the changes. Using a reload dry run, you can tell whether a CDB upgrade is needed or not.
-
-The `report all-schema-changes` option of the reload action instructs NSO to produce a report of how the current data model schema is being changed. Combined with a dry run, the report allows you to verify the modifications introduced with the new versions of the packages before actually performing the upgrade.
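-
-For example, combining both options to test the upgrade and report the schema changes without committing anything:
-
-```cli
-admin@ncs# packages reload dry-run report all-schema-changes
-```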
-
-For a data model upgrade, including a dry run, all transactions must be closed. In particular, users having CLI sessions in configure mode must exit to operational mode. If there are ongoing commit queue items, and the `wait-commit-queue-empty` parameter is supplied, it will wait for the items to finish before proceeding with the reload. During this time, it will not allow the creation of any new transactions. Hence, if one of the queue items fails with `rollback-on-error` option set, the commit queue's rollback will also fail, and the queue item will be locked. In this case, the reload will be canceled. A manual investigation of the failure is needed in order to proceed with the reload.
-
-While the data model upgrade is in progress, all transactions are closed and new transactions are not allowed. This means that starting a new management session, such as a CLI or SSH connection to the NSO, will also fail, producing an error that the node is in upgrade mode.
-
-By default, the `reload` action will, when needed, wait up to 10 seconds for the commit queue to empty (if the `wait-commit-queue-empty` parameter is entered) and for the reload to start.
-
-If there are still open transactions at the end of this period, the upgrade will be canceled and the reload operation will fail. The `max-wait-time` and `timeout-action` parameters to the action can modify this behavior. For example, to wait for up to 30 seconds, and forcibly terminate any transactions that still remain open after this period, we can invoke the action as:
-
-```cli
-admin@ncs# packages reload max-wait-time 30 timeout-action kill
-```
-
-Thus, the default values for these parameters are `10` and `fail`, respectively. In case there are no changes to `.fxs` or `.ccl` files, the reload can be carried out without the data model upgrade procedure, and these parameters are ignored since there is no need to close open transactions.
-
-When reloading packages, NSO will give a warning when the upgrade looks suspicious, i.e., may break some functionality. Note that this is not a strict upgrade validation, but only intended as a hint to the NSO administrator early in the upgrade process that something might be wrong. Currently, the following scenarios will trigger the warnings:
-
-* One or more namespaces are removed by the upgrade. The consequence of this is all data belonging to this namespace is permanently deleted from CDB upon upgrade. This may be intended in some scenarios, in which case it is advised to proceed with overriding warnings as described below.
-* There are source `.java` files found in the package, but no matching `.class` files in the jars loaded by NSO. This likely means that the package has not been compiled.
-* There are matching `.class` files with modification time older than the source files, which hints that the source has been modified since the last time the package was compiled. This likely means that the package was not re-compiled the last time the source code was changed.
-
-If a warning is triggered, it is strongly recommended to fix the root cause. If all of the warnings are intended, it is possible to proceed with the `packages reload force` command.
-
-In some specific situations, upgrading a package with newly added custom validation points in the data model may produce an error similar to `no registration found for callpoint NEW-VALIDATION/validate` or simply `application communication failure`, resulting in an aborted upgrade. See [New Validation Points](../../development/core-concepts/using-cdb.md#cdb.upgrade-add-vp) on how to proceed.
-
-In some cases, we may want NSO to do the same operation as the `reload` action at NSO startup, i.e. copy all packages from the load path before loading, even though the private directory copy already exists. This can be achieved in the following ways:
-
-* Setting the shell environment variable `$NCS_RELOAD_PACKAGES` to `true`. This will make NSO do the copy from the load path on every startup, as long as the environment variable is set. In a System Install, NSO is typically started as a `systemd` system service, and `NCS_RELOAD_PACKAGES=true` can be set in `/etc/ncs/ncs.systemd.conf` temporarily to reload the packages.
-* Giving the option `--with-package-reload` to the `ncs` command when starting NSO. This will make NSO do the copy from the load path on this particular startup, without affecting the behavior on subsequent startups.
-* If warnings are encountered when reloading packages at startup using one of the options above, the recommended way forward is to fix the root cause as indicated by the warnings as mentioned before. If the intention is to proceed with the upgrade without fixing the underlying cause for the warnings, it is possible to force the upgrade using `NCS_RELOAD_PACKAGES`=`force` environment variable or `--with-package-reload-force` option.
-
-Always use one of these methods when upgrading to a new version of NSO in an existing directory structure, to make sure that new packages are loaded together with the other parts of the new system.
-
-## Redeploying Packages
-
-If it is known in advance that there were no data model changes, i.e., none of the `.fxs` or `.ccl` files changed, none of the shared JARs changed in a Java package, and the declaration of the components in the `package-meta-data.xml` is unchanged, then it is possible to do a lightweight package upgrade, called a package redeploy. A package redeploy only loads the specified package, unlike `packages reload`, which loads all of the packages found in the load path.
-
-```bash
-admin@ncs# packages package mserv redeploy
-result true
-```
-
-Redeploying a package allows you to load new or updated templates, reload private JARs for a Java package, or reload the Python code that is part of the package. Only the changed parts of the package will be reloaded; for example, if only templates changed and no Python code, then only the templates are reloaded and the Python VM is not restarted. The upgrade is not seamless, however, as the old templates are unloaded for a short while before the new ones are loaded, so any user of the template during this period will fail; the same applies to changed Java or Python code. It is hence the responsibility of the user to make sure that the services or other code provided by the package are unused while it is being redeployed.
-
-The `package redeploy` action will return `true` if the package's resulting status after the redeploy is `up`. Consequently, if the result of the action is `false`, it is advised to check the operational status of the package in the package list.
-
-```bash
-admin@ncs# show packages package mserv oper-status
-oper-status file-load-error
-oper-status error-info "template3.xml:2 Unknown servicepoint: templ42-servicepoint"
-```
-
-## Adding NED Packages
-
-Unlike a full `packages reload` operation, new NED packages can be loaded into the system without disrupting existing transactions. This is only possible for new packages, since these packages don't yet have any instance data.
-
-The operation is performed through the `/packages/add` action. No additional input is necessary. The operation scans all the load-paths for any new NED packages and also verifies the existing packages are still present. If packages are modified or deleted, the operation will fail.
-
-Each NED package defines `ned-id`, an identifier that is used in selecting the NED for each managed device. A new NED package is therefore a package with a ned-id value that is not already in use.
-
-In addition, the system imposes some additional constraints, so it is not always possible to add just any arbitrary NED. In particular, NED packages can also contain one or more shared data models, such as NED settings or operational data for private use by the NED, that are not specific to each version of NED package but rather shared between all versions. These are typically placed outside any mount point (device-specific data model), extending the NSO schema directly. So, if a NED defines schema nodes outside any mount point, there must be no changes to these nodes if they already exist.
-
-Adding a NED package with a modified shared data model is therefore not allowed and all shared data models are verified to be identical before a NED package can be added. If they are not, the `/packages/add` action will fail and you will have to use the `/packages/reload` command.
-
-```bash
-admin@ncs# packages add
-add-result {
- package router-nc-1.1
- result true
-}
-```
-
-The command returns `true` if the package's resulting status after deployment is `up`. Likewise, if the result for a package is `false`, then the package was added but its code has not started successfully and you should check the operational status of the package with the `show packages package oper-status` command for additional information. You may then use the `/packages/package/redeploy` action to retry deploying the package's code, once you have corrected the error.
-
-{% hint style="info" %}
-In a high-availability setup, you can perform this same operation on all the nodes in the cluster with a single `packages ha sync and-add` command.
-{% endhint %}
-
-## Managing Packages
-
-In a System Install of NSO, management of pre-built packages is supported through a number of actions. This support is not available in a Local Install, since it is dependent on the directory structure created by the System Install. Please refer to the YANG submodule `$NCS_DIR/src/ncs/yang/tailf-ncs-software.yang` for the full details of the functionality described in this section.
-
-### Actions
-
-Actions are provided to list local packages, to fetch packages from the file system, and to install or deinstall packages:
-
-* `software packages list [...]`: List local packages, categorized into loaded, installed, and installable. The listing can be restricted to only one of the categories - otherwise, each package listed will include the category for the package.
-* `software packages fetch package-from-file <file>`: Fetch a package by copying it from the file system, making it installable.
-* `software packages install package <package-name> [...]`: Install a package, making it available for loading via the `packages reload` action, or via a system restart with package reload. The action ensures that only one version of the package is installed - if any version of the package is installed already, the `replace-existing` option can be used to deinstall it before proceeding with the installation.
-* `software packages deinstall package <package-name>`: Deinstall a package, i.e. remove it from the set of packages available for loading.
-
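-A sketch of a typical sequence, assuming a hypothetical `router-nc` package file:
-
-```cli
-admin@ncs# software packages fetch package-from-file /tmp/ncs-6.2-router-nc-1.0.1.tar.gz
-admin@ncs# software packages install package router-nc-1.0.1 replace-existing
-admin@ncs# packages reload
-```
-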
-There is also an `upload` action that can be used via NETCONF or REST to upload a package from the local host to the NSO host, making it installable there. It is not feasible to use in the CLI or Web UI, since the actual package file contents are a parameter for the action. It is also not suitable for very large (more than a few megabytes) packages, since the processing of action parameters is not designed to deal with very large values, and there is a significant memory overhead in the processing of such values.
-
-## More on Package Management
-
-NSO Packages contain data models and code for a specific function. It might be NED for a specific device, a service application like MPLS VPN, a WebUI customization package, etc. Packages can be added, removed, and upgraded in run-time. A common task is to add a package to NSO to support a new device type or upgrade an existing package when the device is upgraded.
-
-We assume you have the example up and running from the previous section. Currently installed packages can be viewed with the following command:
-
-```bash
-admin@ncs# show packages
-packages package cisco-ios
- package-version 3.0
- description "NED package for Cisco IOS"
- ncs-min-version [ 3.0.2 ]
- directory ./state/packages-in-use/1/cisco-ios-cli-3.0
- component upgrade-ned-id
- upgrade java-class-name com.tailf.packages.ned.ios.UpgradeNedId
- component cisco-ios
- ned cli ned-id cisco-ios-cli-3.0
- ned cli java-class-name com.tailf.packages.ned.ios.IOSNedCli
- ned device vendor Cisco
-  NAME      VALUE
-  -------------------
-  show-tag  interface
-
- oper-status up
-```
-
-So the above command shows that NSO currently has one package, the NED for Cisco IOS.
-
-NSO reads global configuration parameters from `ncs.conf`. More on NSO configuration later in this guide. By default, it tells NSO to look for packages in a `packages` directory where NSO was started. Using the [examples.ncs/device-management/simulated-cisco-ios](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/simulated-cisco-ios) example to demonstrate:
-
-```bash
-$ pwd
-examples.ncs/device-management/simulated-cisco-ios
-$ NONINTERACTIVE=1 ./demo.sh
-$ ls packages/
-cisco-ios-cli-3.0
-$ ls packages/cisco-ios-cli-3.0
-doc
-load-dir
-netsim
-package-meta-data.xml
-private-jar
-shared-jar
-src
-```
-
-As seen above, a package is a defined file structure with data models, code, and documentation. NSO comes with a few ready-made example packages in `$NCS_DIR/packages/`. Also, there is a library of packages available from Tail-f, especially for supporting specific devices.
-
-### Adding and Upgrading a Package
-
-Assume you would like to add support for Nexus devices to the example. Nexus devices have different data models and another CLI flavor. There is a package for that in `$NCS_DIR/packages/neds/cisco-nx-cli-3.0`.
-
-We can keep NSO running all the time, but we will stop the network simulator to add the Nexus devices to the simulator.
-
-```bash
-$ ncs-netsim stop
-```
-
-Add the nexus package to the NSO runtime directory by creating a symbolic link:
-
-```bash
-$ cd $NCS_DIR/examples.ncs/device-management/simulated-cisco-ios/packages
-$ ln -s $NCS_DIR/packages/neds/cisco-nx-cli-3.0 cisco-nx-cli-3.0
-$ ls -l
-...
-cisco-nx-cli-3.0 -> $NCS_DIR/packages/neds/cisco-nx-cli-3.0
-```
-
-The package is now in place, but until we tell NSO to look for package changes nothing happens:
-
-```bash
-admin@ncs# show packages
-packages package cisco-ios
-...
-admin@ncs# packages reload
-
->>> System upgrade is starting.
->>> Sessions in configure mode must exit to operational mode.
->>> No configuration changes can be performed until upgrade has
-completed.
->>> System upgrade has completed successfully.
-reload-result {
- package cisco-ios
- result true
-}
-reload-result {
- package cisco-nx
- result true
-}
-```
-
-So, after the `packages reload` operation, NSO also knows about Nexus devices. The reload operation also takes any changes to existing packages into account. The data store is automatically upgraded to cater for any changes, such as attributes added to existing configuration data.
-
-### Simulating the New Device
-
-```bash
-$ ncs-netsim add-to-network cisco-nx-cli-3.0 2 n
-$ ncs-netsim list
-ncs-netsim list for examples.ncs/device-management/simulated-cisco-ios/netsim
-
-name=c0 ...
-name=c1 ...
-name=c2 ...
-name=n0 ...
-name=n1 ...
-
-
-$ ncs-netsim start
-DEVICE c0 OK STARTED
-DEVICE c1 OK STARTED
-DEVICE c2 OK STARTED
-DEVICE n0 OK STARTED
-DEVICE n1 OK STARTED
-$ ncs-netsim cli-c n0
-n0#show running-config
-no feature ssh
-no feature telnet
-fex 101
- pinning max-links 1
-!
-fex 102
- pinning max-links 1
-!
-nexus:vlan 1
-!
-...
-```
-
-### Adding the New Devices to NSO
-
-We can now add these Nexus devices to NSO according to the below sequence:
-
-```bash
-admin@ncs(config)# devices device n0 device-type cli ned-id cisco-nx-cli-3.0
-admin@ncs(config-device-n0)# port 10025
-admin@ncs(config-device-n0)# address 127.0.0.1
-admin@ncs(config-device-n0)# authgroup default
-admin@ncs(config-device-n0)# state admin-state unlocked
-admin@ncs(config-device-n0)# commit
-admin@ncs(config-device-n0)# top
-admin@ncs(config)# devices device n0 sync-from
-result true
-```
diff --git a/administration/management/system-management/README.md b/administration/management/system-management/README.md
deleted file mode 100644
index 20a2e6fa..00000000
--- a/administration/management/system-management/README.md
+++ /dev/null
@@ -1,763 +0,0 @@
----
-description: Perform NSO system management and configuration.
----
-
-# System Management
-
-NSO consists of a number of modules and executable components. These executable components will be referred to by their command-line name, e.g. `ncs`, `ncs-netsim`, `ncs_cli`, etc. `ncs` is used to refer to the executable, the running daemon.
-
-## Starting NSO
-
-When NSO is started, it reads its configuration file and starts all subsystems configured to start (such as NETCONF, CLI, etc.).
-
-By default, NSO starts in the background without an associated terminal. It is recommended to use a [System Install](../../installation-and-deployment/system-install.md) when installing NSO for production deployment. This will create an `init` script that starts NSO when the system boots, and makes NSO start the service manager.
-
-## Licensing NSO
-
-NSO is licensed using Cisco Smart Licensing. To register your NSO instance, you need to enter a token from your Cisco Smart Software Manager account. For more information on this topic, see [Cisco Smart Licensing](cisco-smart-licensing.md).
-
-## Configuring NSO
-
-NSO is configured in the following two ways:
-
-* Through its configuration file, `ncs.conf`.
-* Through whatever data is configured at run-time over any northbound, for example, turning on trace using the CLI.
-
-### `ncs.conf` File
-
-The configuration file `ncs.conf` is read at startup and can be reloaded. The most common settings are outlined below and should be self-explanatory. See [ncs.conf](../../../resources/man/ncs.conf.5.md) in Manual Pages for more information. Important configuration settings are:
-
-* `load-path`: where NSO should look for compiled YANG files, such as data models for NEDs or Services.
-* `db-dir`: the directory on disk that CDB uses for its storage and any temporary files being used. It is also the directory where CDB searches for initialization files. This should be a local disk and not NFS mounted for performance reasons.
-* Various log settings.
-* AAA configuration.
-* Rollback file directory and history length.
-* Enabling north-bound interfaces like REST, and WebUI.
-* Enabling of High-Availability mode.
-
-The `ncs.conf` file is described in the [NSO Manual Pages](../../../resources/man/ncs.conf.5.md). There is a large number of configuration items in `ncs.conf`, most of which have sane default values. The `ncs.conf` file is an XML file that must adhere to the `tailf-ncs-config.yang` model. If we start the NSO daemon directly, we must provide the path to the NCS configuration file as in:
-
-```bash
-# ncs -c /etc/ncs/ncs.conf
-```
-
-However, in a System Install, `systemd` is typically used to start NSO, and it will pass the appropriate options to the `ncs` command. Thus, NSO is started with the command:
-
-```bash
-# systemctl start ncs
-```
-
-It is possible to edit the `ncs.conf` file, and then tell NSO to reload the edited file without restarting the daemon as in:
-
-```bash
-# ncs --reload
-```
-
-This command also tells NSO to close and reopen all log files, which makes it suitable to use from a system like `logrotate`.
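-
-A hypothetical `logrotate` snippet using this mechanism (the log directory and binary path assume a default System Install):
-
-```
-/var/log/ncs/*.log {
-    weekly
-    rotate 4
-    compress
-    postrotate
-        /opt/ncs/current/bin/ncs --reload
-    endscript
-}
-```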
-
-In this section, some of the important configuration settings will be described and discussed.
-
-### Exposed Interfaces
-
-NSO allows access through a number of different interfaces, depending on the use case. In the default configuration, clients can access the system locally through an unauthenticated IPC socket (with the `ncs*` family of commands, port 4569) and plain (non-HTTPS) HTTP web server (port 8080). Additionally, the system enables remote access through SSH-secured NETCONF and CLI (ports 2022 and 2024).
-
-We strongly encourage you to review and customize the exposed interfaces to your needs in the `ncs.conf` configuration file. In particular, set:
-
-* `/ncs-config/webui/match-host-name` to `true`.
-* `/ncs-config/webui/server-name` to the hostname of the server.
-* `/ncs-config/webui/server-alias` to additional domains or IP addresses used for serving HTTP(S).
-
-If you decide to allow remote access to the web server, make sure you use TLS-secured HTTPS instead of HTTP and keep `match-host-name` enabled. Not doing so exposes you to security risks.
-
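-A sketch of these settings in `ncs.conf` (the hostnames are examples):
-
-```xml
-<webui>
-  <match-host-name>true</match-host-name>
-  <server-name>nso.example.com</server-name>
-  <server-alias>nso-alt.example.com</server-alias>
-</webui>
-```
-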
-{% hint style="info" %}
-Using `/ncs-config/webui/match-host-name = true` requires you to use the configured hostname when accessing the server. Web browsers do this automatically but you may need to set the `Host` header when performing requests programmatically using an IP address instead of the hostname.
-{% endhint %}
-
-To additionally secure IPC access, refer to [Restricting Access to the IPC Socket](../../advanced-topics/ipc-connection.md#restricting-access-to-the-ipc-socket).
-
-For more details on individual interfaces and their use, see [Northbound APIs](../../../development/core-concepts/northbound-apis/).
-
-### Dynamic Configuration
-
-Let's look at all the settings that can be manipulated through the NSO northbound interfaces. NSO itself has a number of built-in YANG modules. These YANG modules describe the structure that is stored in CDB. Whenever we change anything under, say `/devices/device`, it will change the CDB, but it will also change the configuration of NSO. We call this dynamic configuration since it can be changed at will through all northbound APIs.
-
-We summarize the most relevant parts below:
-
-```cli
-ncs@ncs(config)#
-Possible completions:
- aaa AAA management, users and groups
- cluster Cluster configuration
- devices Device communication settings
- java-vm Control of the NCS Java VM
- nacm Access control
- packages Installed packages
- python-vm Control of the NCS Python VM
- services Global settings for services, (the services themselves might be augmented somewhere else)
- session Global default CLI session parameters
- snmp Top-level container for SNMP related configuration and status objects.
- snmp-notification-receiver Configure reception of SNMP notifications
- software Software management
- ssh Global SSH connection configuration
-```
-
-#### **`tailf-ncs.yang` Module**
-
-This is the most important YANG module that is used to control and configure NSO. The module can be found at: `$NCS_DIR/src/ncs/yang/tailf-ncs.yang` in the release. Everything in that module is available through the northbound APIs. The YANG module has descriptions for everything that can be configured.
-
-`tailf-common-monitoring2.yang` and `tailf-ncs-monitoring2.yang` are two modules that are relevant to monitoring NSO.
-
-### Built-in or External SSH Server
-
-NSO has a built-in SSH server which makes it possible to SSH directly into the NSO daemon. Both the NSO northbound NETCONF agent and the CLI need SSH. To configure the built-in SSH server we need a directory with server SSH keys - it is specified via `/ncs-config/aaa/ssh-server-key-dir` in `ncs.conf`. We also need to enable `/ncs-config/netconf-north-bound/transport/ssh` and `/ncs-config/cli/ssh` in `ncs.conf`. In a System Install, `ncs.conf` is installed in the "config directory", by default `/etc/ncs`, with the SSH server keys in `/etc/ncs/ssh`.
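-
-The relevant `ncs.conf` fragments could look like the following sketch (a System Install layout is assumed; `${NCS_CONFIG_DIR}` expands to the config directory):
-
-```xml
-<aaa>
-  <ssh-server-key-dir>${NCS_CONFIG_DIR}/ssh</ssh-server-key-dir>
-</aaa>
-<cli>
-  <ssh>
-    <enabled>true</enabled>
-  </ssh>
-</cli>
-<netconf-north-bound>
-  <transport>
-    <ssh>
-      <enabled>true</enabled>
-    </ssh>
-  </transport>
-</netconf-north-bound>
-```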
-
-### Run-time Configuration
-
-There are also configuration parameters that are more related to how NSO behaves when talking to the devices. These reside in `devices global-settings`.
-
-```cli
-admin@ncs(config)# devices global-settings
-Possible completions:
- backlog-auto-run Auto-run the backlog at successful connection
- backlog-enabled Backlog requests to non-responding devices
- commit-queue
- commit-retries Retry commits on transient errors
- connect-timeout Timeout in seconds for new connections
- ned-settings Control which device capabilities NCS uses
- out-of-sync-commit-behaviour Specifies the behaviour of a commit operation involving a device that is out of sync with NCS.
- read-timeout Timeout in seconds used when reading data
- report-multiple-errors By default, when the NCS device manager commits data southbound and when there are errors, we only
- report the first error to the operator, this flag makes NCS report all errors reported by managed
- devices
- trace Trace the southbound communication to devices
- trace-dir The directory where trace files are stored
- write-timeout Timeout in seconds used when writing
- data
-```
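-
-As an illustration, a hypothetical session that raises the connect timeout and enables pretty-printed southbound traces (the values are examples, not recommendations):
-
-```cli
-admin@ncs(config)# devices global-settings connect-timeout 30
-admin@ncs(config)# devices global-settings trace pretty trace-dir ./logs
-admin@ncs(config)# commit
-Commit complete.
-```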
-
-## User Management
-
-Users are configured at the path `aaa authentication users`.
-
-```cli
-admin@ncs(config)# show full-configuration aaa authentication users user
-aaa authentication users user admin
- uid 1000
- gid 1000
- password $1$GNwimSPV$E82za8AaDxukAi8Ya8eSR.
- ssh_keydir /var/ncs/homes/admin/.ssh
- homedir /var/ncs/homes/admin
-!
-aaa authentication users user oper
- uid 1000
- gid 1000
- password $1$yOstEhXy$nYKOQgslCPyv9metoQALA.
- ssh_keydir /var/ncs/homes/oper/.ssh
- homedir /var/ncs/homes/oper
-!...
-```
-
-Access control, including group memberships, is managed using the NACM model (RFC 6536).
-
-```cli
-admin@ncs(config)# show full-configuration nacm
-nacm write-default permit
-nacm groups group admin
- user-name [ admin private ]
-!
-nacm groups group oper
- user-name [ oper public ]
-!
-nacm rule-list admin
- group [ admin ]
- rule any-access
- action permit
- !
-!
-nacm rule-list any-group
- group [ * ]
- rule tailf-aaa-authentication
- module-name tailf-aaa
- path /aaa/authentication/users/user[name='$USER']
- access-operations read,update
- action permit
- !
-```
-
-### Adding a User
-
-Adding a user includes the following steps:
-
-1. Create the user: `admin@ncs(config)# aaa authentication users user <user-name>`.
-2. Add the user to a NACM group: `admin@ncs(config)# nacm groups group admin user-name [ <user-name> ]`.
-3. Verify/change access rules, as illustrated in the sketch below.
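-
-A hypothetical transcript of these steps for a new user `jim` (the username, password, and group are assumptions; note that assigning the `user-name` leaf-list rewrites it, so existing members must be repeated):
-
-```cli
-admin@ncs(config)# aaa authentication users user jim uid 1000 gid 1000 password secret123 homedir /var/ncs/homes/jim ssh_keydir /var/ncs/homes/jim/.ssh
-admin@ncs(config-user-jim)# exit
-admin@ncs(config)# nacm groups group oper user-name [ oper public jim ]
-admin@ncs(config)# commit
-Commit complete.
-```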
-
-It is likely that the new user also needs access to work with device configuration. The mapping from NSO users and corresponding device authentication is configured in `authgroups`. So, the user needs to be added there as well.
-
-```cli
-admin@ncs(config)# show full-configuration devices authgroups
-devices authgroups group default
- umap admin
- remote-name admin
- remote-password $4$wIo7Yd068FRwhYYI0d4IDw==
- !
- umap oper
- remote-name oper
- remote-password $4$zp4zerM68FRwhYYI0d4IDw==
- !
-!
-```
-
-If the last step is forgotten, you will see the following error:
-
-```cli
-jim@ncs(config)# devices device c0 config ios:snmp-server community fee
-jim@ncs(config-config)# commit
-Aborted: Resource authgroup for jim doesn't exist
-```
-
-## Monitoring NSO
-
-This section describes how to monitor NSO. See also [NSO Alarms](./#nso-alarms).
-
-Use the command `ncs --status` to get runtime information on NSO.
-
-### NSO Status
-
-Checking the overall status of NSO can be done using the shell:
-
-```bash
-$ ncs --status
-```
-
-Or, in the CLI:
-
-```cli
-ncs# show ncs-state
-```
-
-For details on the output, see `$NCS_DIR/src/ncs/yang/tailf-common-monitoring2.yang`.
-
-Below is an overview of the output:
-
-
-* `daemon-status`: The NSO daemon mode: `starting`, `phase0`, `phase1`, `started`, or `stopping`. The `phase0` and `phase1` modes are schema upgrade modes and will appear if you have upgraded any data models.
-* `version`: The NSO version.
-* `smp`: The number of threads used by the daemon.
-* `ha`: The High-Availability mode of the NCS daemon will show up here: `secondary`, `primary`, or `relay-secondary`.
-* `internal/callpoints`: Make sure that any validation points, etc. are registered (the `ncs-rfs-service-hook` is an obsolete callpoint; ignore it). `UNKNOWN` means that code tries to register a callpoint that does not exist in a data model; `NOT-REGISTERED` means that a loaded data model has a callpoint but no code has registered it. Of special interest are, of course, the servicepoints: all your deployed service models should have a corresponding service-point.
-* `internal/cdb`: The `cdb` section is important. Look for any locks; this might be a sign that a developer has taken a CDB lock without releasing it. The `subscriber` section is also important: a design pattern is to register subscribers to wait for something to change in NSO and then trigger an action. Reactive FASTMAP is designed around that. Validate that all expected subscribers are OK.
-* `loaded-data-models`: Shows all namespaces and YANG modules that are loaded. If you, for example, are missing a service model, make sure it is loaded.
-* `cli`, `netconf`, `rest`, `snmp`, `webui`: All northbound agents like CLI, REST, NETCONF, SNMP, etc. are listed with their IP and port. So if you want to connect over REST, for example, you can see the port number here.
-* `patches`: Lists any installed patches.
-* `upgrade-mode`: If the node is in upgrade mode, it is not possible to get any information from the system over NETCONF. Existing CLI sessions can get system information.
-
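-Individual parts of `ncs-state` can also be read directly in the CLI; a small sketch, assuming a started daemon:
-
-```cli
-ncs# show ncs-state daemon-status
-ncs-state daemon-status started
-```
-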
-It is also important to look at the packages that are loaded. This can be done in the CLI with:
-
-```cli
-admin> show packages
-packages package cisco-asa
- package-version 3.4.0
- description "NED package for Cisco ASA"
- ncs-min-version [ 3.2.2 3.3 3.4 4.0 ]
- directory ./state/packages-in-use/1/cisco-asa
- component upgrade-ned-id
- upgrade java-class-name com.tailf.packages.ned.asa.UpgradeNedId
- component ASADp
- callback java-class-name [ com.tailf.packages.ned.asa.ASADp ]
- component cisco-asa
- ned cli ned-id cisco-asa
- ned cli java-class-name com.tailf.packages.ned.asa.ASANedCli
- ned device vendor Cisco
-```
-
-### Monitoring the NSO Daemon
-
-NSO runs the following processes (a shell sketch for spotting them follows the list):
-
-* **The daemon**, `ncs.smp`: the NCS process running in the Erlang VM.
-* **Java VM**, `com.tailf.ncs.NcsJVMLauncher`: service applications implemented in Java run in this VM. There are several options for how to start the Java VM; it can be monitored and started/restarted by NSO or by an external monitor. See the [ncs.conf(5)](../../../resources/man/ncs.conf.5.md) Manual Page and the `java-vm` settings in the CLI.
-* **Python VMs**: NSO packages can be implemented in Python. The individual packages can be configured to run in a VM each or share a Python VM. Use the `show python-vm status current` command to see current threads and `show python-vm status start` to see which threads were started at startup time.
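-
-A quick shell sketch for spotting the first two processes (the exact command lines vary between installations):
-
-```bash
-$ ps ax -o pid,command | grep -E 'ncs\.smp|NcsJVMLauncher' | grep -v grep
-```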
-
-### Logging
-
-NSO has extensive logging functionality. Log settings are typically very different for a production system compared to a development system. Furthermore, the logging of the NSO daemon and the NSO Java VM/Python VM is controlled by different mechanisms. During development, we typically want to turn on the `developer-log`. The sample `ncs.conf` that comes with the NSO release has log settings suitable for development, while the `ncs.conf` created by a System Install is suitable for production deployment.
-
-By default, NSO logs to the `logs` directory of the run directory (depending on your settings in `ncs.conf`). You might want the log files to be stored somewhere else; see `man ncs.conf` for details on how to configure the various logs. Below is a list of the most useful log files:
-
-* `ncs.log` : NCS daemon log. See [Log Messages and Formats](log-messages-and-formats.md). Can be configured to Syslog.
-* `ncserr.log.1`, `ncserr.log.idx`, `ncserr.log.siz`: if the NSO daemon has a problem, these files contain debug information relevant to support. The content can be displayed with `ncs --printlog ncserr.log`.
-* `audit.log`: central audit log covering all northbound interfaces. See [Log Messages and Formats](log-messages-and-formats.md). Can be configured to Syslog.
-* `localhost:8080.access`: all HTTP requests to the daemon. This is an access log for the embedded web server. The file adheres to the Common Log Format, as defined by Apache and others. This log is not enabled by default and is not rotated; use logrotate(8). Can be configured to Syslog.
-* `devel.log`: the developer log is a debug log for troubleshooting user-written code. This log is enabled by default and is not rotated; use logrotate(8). Use this log in combination with the `java-vm` or `python-vm` logs: user code logs to the VM logs, and the corresponding library logs to `devel.log`. Disable this log in production systems. Can be configured to Syslog.\
- \
- You can manage this log and set its logging level in `ncs.conf`.
-
- ```xml
- <developer-log>
-   <enabled>true</enabled>
-   <file>
-     <name>${NCS_LOG_DIR}/devel.log</name>
-     <enabled>false</enabled>
-   </file>
-   <syslog>
-     <enabled>true</enabled>
-   </syslog>
- </developer-log>
- <developer-log-level>trace</developer-log-level>
- ```
-* `ncs-java-vm.log`, `ncs-python-vm.log`: logs for code running in the Java or Python VM, for example, service applications. Developers writing Java and Python code use these logs (in combination with `devel.log`) for debugging. Both Java and Python log levels can be set from their respective VM settings, for example, in the CLI.
-
- ```cli
- admin@ncs(config)# python-vm logging level level-info
- admin@ncs(config)# java-vm java-logging logger com.tailf.maapi level level-info
- ```
-* `netconf.log`, `snmp.log`: logs for northbound agents. Can be configured to Syslog.
-* `rollbackNNNNN`: All NSO commits generate a corresponding rollback file. The maximum number of rollback files and file numbering can be configured in `ncs.conf`.
-* `xpath.trace`: XPath is used in many places, for example, in XML templates. This log file shows the evaluation of all XPath expressions and can be enabled in `ncs.conf`.
-
- ```xml
- <xpath-trace-log>
-   <enabled>true</enabled>
-   <filename>${NCS_LOG_DIR}/xpath.trace</filename>
- </xpath-trace-log>
- ```
-
- To debug XPath for a template, use the pipe target `debug` in the CLI instead.
-
- ```cli
- admin@ncs(config)# commit | debug template
- ```
-* `ned-cisco-ios-xr-pe1.trace` (for example): if the device trace is turned on, a trace file will be created per device. The file location is not configured in `ncs.conf` but is configured when the device trace is turned on, for example, in the CLI.
-
- ```cli
- admin@ncs(config)# devices device r0 trace pretty
- ```
-* Progress trace log: When a transaction or action is applied, NSO emits specific progress events. These events can be displayed and recorded in a number of different ways, either in CLI with the pipe target `details` on a commit, or by writing it to a log file. You can read more about it in the [Progress Trace](../../../development/advanced-development/progress-trace.md).
-* Transaction error log: log for collecting information on failed transactions that lead to either a CDB boot error or a runtime transaction failure. The default is `false` (disabled). More information about the log is available in the Manual Pages under [Configuration Parameters](../../../resources/man/ncs.conf.5.md#configuration-parameters) (see `logs/transaction-error-log`).
-* Upgrade log: log containing information about CDB upgrades. The log is enabled by default and not rotated (use logrotate(8)). With the NSO example set, the following examples populate the log in the `logs/upgrade.log` file: [examples.ncs/device-management/ned-yang-revision](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/ned-yang-revision), [examples.ncs/high-availability/upgrade-basic](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/upgrade-basic), [examples.ncs/high-availability/upgrade-cluster](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/upgrade-cluster), and [examples.ncs/service-management/upgrade-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/upgrade-service). More information about the log is available in the Manual Pages under [Configuration Parameters](../../../resources/man/ncs.conf.5.md#configuration-parameters) (see `logs/upgrade-log`).
-
-### Syslog
-
-NSO can log to a local Syslog; see `man ncs.conf` for how to configure the Syslog settings. All Syslog messages are documented in Log Messages. The `ncs.conf` file also lets you decide which of the logs should go into Syslog: `ncs.log`, `devel.log`, `netconf.log`, `snmp.log`, `audit.log`, and the WebUI access log. It is also possible to integrate with `rsyslog` to log the NCS, developer, audit, NETCONF, SNMP, and WebUI access logs to syslog with the facility set to `daemon` in `ncs.conf`. For reference, see the `upgrade-l2` example in [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc).
-
-Below is an example of Syslog configuration:
-
-```xml
-<syslog-config>
-  <facility>daemon</facility>
-</syslog-config>
-
-<ncs-log>
-  <enabled>true</enabled>
-  <file>
-    <name>./logs/ncs.log</name>
-    <enabled>true</enabled>
-  </file>
-  <syslog>
-    <enabled>true</enabled>
-  </syslog>
-</ncs-log>
-```
-
-Log messages are described on the link below:
-
-{% content-ref url="log-messages-and-formats.md" %}
-[log-messages-and-formats.md](log-messages-and-formats.md)
-{% endcontent-ref %}
-
-### NSO Alarms
-
-NSO generates alarms for serious problems that must be remedied. Alarms are available over all the northbound interfaces and exist at the path `/alarms`. NSO alarms are managed like any other alarms by the general NSO Alarm Manager; see the specific section on the alarm manager to understand the general alarm mechanisms.
-
-The NSO alarm manager also presents a northbound SNMP view: alarms can be retrieved as an alarm table, and alarm state changes are reported as SNMP notifications. See the "NSO Northbound" documentation for how to configure the SNMP agent.
-
-This is also documented in the example [examples.ncs/northbound-interfaces/snmp-alarm](https://github.com/NSO-developer/nso-examples/tree/6.6/northbound-interfaces/snmp-alarm).
-
-Alarms are described on the link below:
-
-{% content-ref url="alarms.md" %}
-[alarms.md](alarms.md)
-{% endcontent-ref %}
-
-### Tracing in NSO
-
-Tracing enables observability across NSO operations by tagging requests with unique identifiers. NSO allows for using Trace Context (recommended) and Trace ID, while the `label` commit parameter can be used to correlate events. Together, these allow tracking of requests across service invocations, internal operations, and downstream device configurations.
-
-#### **Trace Context (Recommended)**
-
-NSO supports Trace Context based on the [W3C Trace Context specification](https://www.w3.org/TR/trace-context/), which is the recommended approach for distributed request tracing. This allows tracing information to flow between systems using standardized headers.
-
-When using Trace Context:
-
-* Trace information is carried in the `traceparent` and `tracestate` attributes.
-* The trace ID is a UUID (RFC 4122) and is automatically generated and enforced.
-* Trace Context is propagated automatically across NSO operations, including LSA setups and commit queues.
-* There is no need to pass the trace ID manually as a commit parameter.
-* It is supported across all major northbound protocols: NETCONF, RESTCONF, JSON-RPC, CLI, and MAAPI.
-* Trace data appears in logs and trace files, enabling consistent request tracking across services and systems.
-
-{% hint style="info" %}
-When Trace Context is used, NSO handles tracing internally in compliance with W3C standards. Using an explicit `trace-id` commit parameter is therefore neither needed nor recommended.
-{% endhint %}
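-
-As an illustration, a hypothetical RESTCONF request that propagates a W3C `traceparent` header (the device name `ex0`, credentials, and header values are placeholders):
-
-```bash
-$ curl -u admin:admin \
-    -H "traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01" \
-    "http://localhost:8080/restconf/data/tailf-ncs:devices/device=ex0/config"
-```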
-
-#### Trace ID
-
-NSO can issue a unique Trace ID per northbound request, visible in logs and trace headers. This Trace ID can be used to follow the request from service invocation to configuration changes pushed to any device affected by the change. The Trace ID may either be passed in from an external client or generated by NSO. Note that:
-
-* Trace ID is enabled by default.
-* Trace ID is propagated downwards in [LSA](../../advanced-topics/layered-service-architecture.md) setups and is fully integrated with commit queues.
-* Trace ID can be passed to NSO over NETCONF, RESTCONF, JSON-RPC, CLI, or MAAPI as a commit parameter.
-* If Trace ID is not given as a commit parameter, NSO will generate one.
-
-The generated Trace ID is an array of 16 random bytes, encoded as a 32-character hexadecimal string, in accordance with [Trace ID](https://www.w3.org/TR/trace-context/#trace-id). NSO also accepts arbitrary strings, but the UUID format (as per [RFC 4122](https://datatracker.ietf.org/doc/html/rfc4122), a 128-bit value formatted as a 36-character hyphenated string: xxxxxxxx-xxxx-Mxxx-Nxxx-xxxxxxxxxxxx, e.g., `550e8400-e29b-41d4-a716-446655440000`) is the preferred approach for creating Trace IDs.
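-
-For example, a sketch of supplying a UUID as the `trace-id` commit parameter in the CLI, assuming a configuration change is pending:
-
-```cli
-admin@ncs(config)# commit trace-id 550e8400-e29b-41d4-a716-446655440000
-Commit complete.
-```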
-
-For RESTCONF requests, this generated Trace ID will be communicated back to the requesting client as an HTTP header called `X-Cisco-NSO-Trace-ID`. The `trace-id` query parameter can also be used with RPCs and actions to relay a trace-id from northbound requests.
-
-For NETCONF, the Trace ID will be returned as an attribute called `trace-id`.
-
-Trace ID will appear in relevant log entries and trace file headers in the form `trace-id=...`.
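-
-A hypothetical RESTCONF edit that relays a client-chosen Trace ID as a query parameter (URL, credentials, and payload are placeholders; the response header shown is illustrative):
-
-```bash
-$ curl -is -u admin:admin -X PATCH \
-    -H "Content-Type: application/yang-data+json" \
-    -d '{"tailf-ncs:devices":{"global-settings":{"connect-timeout":30}}}' \
-    "http://localhost:8080/restconf/data?trace-id=550e8400-e29b-41d4-a716-446655440000"
-...
-X-Cisco-NSO-Trace-ID: 550e8400-e29b-41d4-a716-446655440000
-```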
-
-## Disaster Management
-
-This section describes a number of disaster scenarios and recommends various actions to take in the different disaster variants.
-
-### NSO Fails to Start
-
-CDB keeps its data in four files: `A.cdb`, `C.cdb`, `O.cdb`, and `S.cdb`. If NSO is stopped, these four files can be copied, and the copy is then a full backup of CDB.
-
-Furthermore, if none of these files exist in the configured CDB directory, CDB will attempt to initialize from all files in the CDB directory with the suffix `.xml`.
-
-Thus, there are two different ways to re-initialize CDB from a previously known good state: either from `.xml` files or from a CDB backup. The `.xml` files would typically be used to reinstall factory defaults, whereas a CDB backup could be used in more complex scenarios.
-
-If the `S.cdb` file has become inconsistent or has been removed, all commit queue items will be removed, and devices whose queued changes had not yet been processed will be out of sync. For such an event, appropriate alarms will be raised on the devices, and any service instance that has unprocessed device changes will be set in the failed state.
-
-When NSO starts and fails to initialize, the following exit codes can occur:
-
-* Exit codes 1 and 19 mean that an internal error has occurred. A text message should be in the logs, or if the error occurred at startup before logging had been activated, on standard error (standard output if NSO was started with `--foreground --verbose`). Generally, the message will only be meaningful to the NSO developers, and an internal error should always be reported to support.
-* Exit codes 2 and 3 are only used for the NCS control commands (see the section COMMUNICATING WITH NCS in the [ncs(1)](../../../resources/man/ncs.1.md) manual page) and mean that the command failed due to timeout. Code 2 is used when the initial connect to NSO didn't succeed within 5 seconds (or the `TryTime` if given), while code 3 means that the NSO daemon did not complete the command within the time given by the `--timeout` option.
-* Exit code 10 means that one of the init files in the CDB directory was faulty in some way — further information in the log.
-* Exit code 11 means that the CDB configuration was changed in an unsupported way. This will only happen when an existing database is detected, which was created with another configuration than the current in `ncs.conf`.
-* Exit code 13 means that the schema change caused an upgrade, but for some reason, the upgrade failed. Details are in the log. The way to recover from this situation is either to correct the problem or to re-install the old schema (`fxs`) files.
-* Exit code 14 means that the schema change caused an upgrade, but for some reason the upgrade failed, corrupting the database in the process. This is rare and usually caused by a bug. To recover, either start from an empty database with the new schema, or re-install the old schema files and apply a backup.
-* Exit code 15 means that `A.cdb` or `C.cdb` is corrupt in a non-recoverable way. Remove the files and re-start using a backup or init files.
-* Exit code 16 means that CDB ran into an unrecoverable file error (such as running out of space on the device while performing journal compaction).
-* Exit code 20 means that NSO failed to bind a socket.
-* Exit code 21 means that some NSO configuration file is faulty. More information is in the logs.
-* Exit code 22 indicates an NSO installation-related problem, e.g., that the user does not have read access to some library files, or that some file is missing.
-
-If the NSO daemon starts normally, the exit code is 0.
-
-If the AAA database is broken, NSO will start but with no authorization rules loaded. This means that all write access to the configuration is denied. The NSO CLI can be started with a flag `ncs_cli --noaaa` that will allow full unauthorized access to the configuration.
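-
-For example, assuming a local user `admin` exists:
-
-```bash
-$ ncs_cli --noaaa -u admin
-```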
-
-### NSO Failure After Startup
-
-NSO attempts to handle all runtime problems without terminating, e.g., by restarting specific components. However, there are some cases where this is not possible, described below. When NSO is started the default way, i.e. as a daemon, the exit codes will of course not be available, but see the `--foreground` option in the [ncs(1)](../../../resources/man/ncs.1.md) Manual Page.
-
-* **Out of memory**: If NSO is unable to allocate memory, it will exit by calling abort(3). This will generate an exit code as for reception of the SIGABRT signal - e.g., if NSO is started from a shell script, the script will see 134 as the exit code (128 + the signal number).
-* **Out of file descriptors for accept(2)**: If NSO fails to accept a TCP connection due to a lack of file descriptors, it will log this and then exit with code 25. To avoid this problem, make sure that the process and system-wide file descriptor limits are set high enough, and if needed, configure session limits in `ncs.conf`. The out-of-file-descriptors issue may also manifest itself in applications no longer being able to open new file descriptors.\
- \
- In many Linux systems, the default limit is 1024. If we, for example, assume that there are four northbound interface ports (CLI, RESTCONF, SNMP, WebUI/JSON-RPC, or similar) plus a few hundred IPC ports, 5 x 1024 == 5120 file descriptors could be needed. One might as well use the next power of two, 8192, to be on the safe side.
-
- \
- Several application issues can contribute to consuming extra file descriptors. In the scope of an NSO application, that could, for example, be a script application that invokes a CLI command, or a callback daemon application that does not close its connection socket as it should.
-
- A commonly used command for changing the maximum number of open file descriptors is `ulimit -n [limit]`. Commands such as `netstat` and `lsof` can be useful to debug file descriptor-related issues.
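-
- A small sketch (the limit value is an assumption; the `pgrep` pattern assumes a single NSO daemon on the host):
-
- ```bash
- $ ulimit -n 8192                         # raise the limit for this shell before starting NSO
- $ lsof -p "$(pgrep -f ncs.smp)" | wc -l  # count descriptors currently open by the daemon
- ```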
-
-### Transaction Commit Failure
-
-When the system is updated, NSO executes a two-phase commit protocol towards the different participating databases, including CDB. If a participant fails in the `commit()` phase even though it succeeded in the prepare phase, the configuration is possibly in an inconsistent state.
-
-When NSO considers the configuration to be in an inconsistent state, operations will continue. It is still possible to use NETCONF, the CLI, and all other northbound management agents. The CLI prompt changes to reflect that the system is considered to be in an inconsistent state, and the Web UI shows this as well:
-
-```
- -- WARNING ------------------------------------------------------
- Running db may be inconsistent. Enter private configuration mode and
- install a rollback configuration or load a saved configuration.
- ------------------------------------------------------------------
-```
-
-The MAAPI API has two interface functions that can be used to set and retrieve the consistency status: `maapi_set_running_db_status()` and `maapi_get_running_db_status()`, respectively. This API can thus be used to manually reset the consistency state. The only other alternative for resetting the state to consistent is to reload the entire configuration.
-
-## Backup and Restore
-
-All parts of the NSO installation can be backed up and restored with standard file system backup procedures.
-
-The most convenient way to do backup and restore is to use the `ncs-backup` command. In that case, the following procedure is used.
-
-### Take a Backup
-
-NSO backup saves the database (CDB) files, state files, config files, and rollback files from the installation directory. To take a complete backup (for disaster recovery), use:
-
-```bash
-# ncs-backup
-```
-
-The backup will be stored in the "run directory", by default `/var/opt/ncs`, as `/var/opt/ncs/backups/ncs-VERSION@DATETIME.backup`.
-
-For more information on backup, refer to the [ncs-backup(1)](../../../resources/man/ncs-backup.1.md) in Manual Pages.
-
-### Restore a Backup
-
-Restore NSO from a backup if you would like to switch back to a previous known-good state.
-
-It is always advisable to stop NSO before performing a restore.
-
-1. First, stop NSO if it is not already stopped.
-
- ```bash
- systemctl stop ncs
- ```
-2. Restore the backup.
-
- ```bash
- ncs-backup --restore
- ```
-
- \
- Select the backup to be restored from the available list of backups. The configuration and database with run-time state files are restored in `/etc/ncs` and `/var/opt/ncs`.
-3. Start NSO.
-
- ```bash
- systemctl start ncs
- ```
-
-## Rollbacks
-
-NSO supports creating rollback files during the commit of a transaction, which allows for rolling back the introduced changes. Rollbacks do not come without a cost and should be disabled if the functionality is not going to be used. Enabling rollbacks both increases the time it takes to commit a change and requires sufficient storage on disk.
-
-Rollback files contain a set of headers and the data required to restore the changes that were made when the rollback was created. One of the header fields is a unique rollback ID that can be used to address the rollback file independently of the rollback numbering format.
-
-The use of rollbacks from the supported APIs and the CLI is documented in the documentation for the given API.
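-
-For orientation, a hypothetical C-style CLI session that loads and commits a rollback (the rollback number `10042` is a placeholder; list the available files first):
-
-```cli
-admin@ncs# show configuration commit list
-admin@ncs# config
-admin@ncs(config)# rollback configuration 10042
-admin@ncs(config)# commit
-Commit complete.
-```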
-
-### `ncs.conf` Config for Rollback
-
-As described [earlier](./#configuring-nso), NSO is configured through the configuration file, `ncs.conf`. In that file, we have the following items related to rollbacks (a combined sketch follows the list):
-
-* `/ncs-config/rollback/enabled`: If set to `true`, then a rollback file will be created whenever the running configuration is modified.
-* `/ncs-config/rollback/directory`: Location where rollback files will be created.
-* `/ncs-config/rollback/history-size`: The number of old rollback files to save.
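-
-Put together, a sketch of the corresponding `ncs.conf` fragment (the directory and history size are example values):
-
-```xml
-<rollback>
-  <enabled>true</enabled>
-  <directory>${NCS_RUN_DIR}/rollbacks</directory>
-  <history-size>50</history-size>
-</rollback>
-```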
-
-## Troubleshooting
-
-New users can face problems when they start to use NSO. If you face an issue, reach out to our support team regardless of whether your problem is listed here or not.
-
-{% hint style="success" %}
-A useful tool in this regard is the `ncs-collect-tech-report` tool, a Bash script that comes with the product. It collects all log files, a CDB backup, and several debug dumps into a TAR file. Note that it works only with a System Install.
-
-```bash
-root@linux:/# ncs-collect-tech-report --full
-```
-{% endhint %}
-
-Some noteworthy issues are covered here.
-
-
-
-Installation Problems: Error Messages During Installation
-
-* **Error**
-
- ```
- tar: Skipping to next header
- gzip: stdin: invalid compressed data--format violated
- ```
-
-- **Impact**\
- The resulting installation is incomplete.
-
-* **Cause**\
- This happens if the installation program has been damaged, most likely because it has been downloaded in ASCII mode.
-
-- **Resolution**\
- Remove the installation directory. Download a new copy of NSO from our servers. Make sure you use binary transfer mode every step of the way.
-
-
-
-
-
-Problem Starting NSO: NSO Terminating with GLIBC Error
-
-* **Error**
-
- ```
- Internal error: Open failed: /lib/tls/libc.so.6: version
- `GLIBC_2.3.4' not found (required by
- .../lib/ncs/priv/util/syst_drv.so)
- ```
-
-- **Impact**\
- NSO terminates immediately with a message similar to the one above.
-
-* **Cause**\
- This happens if you are running on a very old Linux version. The GNU libc (GLIBC) version is older than 2.3.4, which was released in 2004.
-
-- **Resolution**\
- Use a newer Linux system, or upgrade the GLIBC installation.
-
-
-
-
-
-Problem in Running Examples: The netconf-console Program Fails
-
-* **Error**\
- You must install the Python SSH implementation Paramiko in order to use SSH.
-
-- **Impact**\
- Sending NETCONF commands and queries with `netconf-console` fails, while it works using `netconf-console-tcp`.
-
-* **Cause**\
- The `netconf-console` command is implemented using the Python programming language. It depends on the Python SSHv2 implementation Paramiko. Since you are seeing this message, your operating system doesn't have the Python module Paramiko installed.
-
-- **Resolution**\
- Install Paramiko using the instructions from [https://www.paramiko.org](https://www.paramiko.org/).\
- \
- When properly installed, you will be able to import the Paramiko module without error messages.
-
- ```bash
- $ python
- ...
- >>> import paramiko
- >>>
- ```
-
- \
- Exit the Python interpreter with Ctrl+D.
-
-* **Workaround**\
- A workaround is to use `netconf-console-tcp`. It uses TCP instead of SSH and doesn't require Paramiko. Note that TCP traffic is not encrypted.
-
-
-
-
-
-Problems Using and Developing Services
-
-If you encounter issues while loading service packages, creating service instances, or developing service models, templates, and code, you can consult the Troubleshooting section in [Implementing Services](../../../development/core-concepts/implementing-services.md).
-
-
-
-### General Troubleshooting Strategies
-
-If you have trouble starting or running NSO, examples, or the clients you write, here are some troubleshooting tips.
-
-
-
-Transcript
-
-When contacting support, it often helps the support engineer to understand what you are trying to achieve if you copy-paste the commands, responses, and shell scripts that you used to trigger the problem, together with any CLI outputs and logs produced by NSO.
-
-
-
-
-
-Source ENV Variables
-
-If you have problems executing `ncs` commands, make sure you source the `ncsrc` script in your NSO directory (your path may be different from the one in the example if you are using a Local Install), which sets the required environment variables.
-
-```bash
-$ source /etc/profile.d/ncs.sh
-```
-
-
-
-
-
-Log Files
-
-To find out what NSO is or was doing, browsing the NSO log files is often helpful. In the examples, they are called `devel.log`, `ncs.log`, and `audit.log`. If you are working with your own system, make sure that the log files are enabled in `ncs.conf` (they are already enabled in all the examples). You can read more about how to enable and inspect various logs in the [Logging](./#ug.ncs_sys_mgmt.logging) section.
-
-
-
-
-
-Verify HW Resources
-
-Both high CPU utilization and a lack of memory can negatively affect the performance of NSO. You can use commands such as `top` to examine resource utilization and `free -mh` to see the amount of free and consumed memory. A common symptom of a lack of memory is NSO or the Java VM restarting. A sufficient amount of disk space is also required for CDB persistence and logs, so check disk space with the `df -h` command. If there is enough space on the disk and you still encounter ENOSPC errors, check the inode usage with the `df -i` command.
-
-
-
-
-
-Status
-
-NSO will give you a comprehensive status of daemon status, YANG modules, loaded packages, MIBs, active user sessions, CDB locks, and more if you run:
-
-```bash
-$ ncs --status
-```
-
-NSO status information is also available as operational data under `/ncs-state`.
-
-
-
-
-
-Check Data Provider
-
-If you are implementing a data provider (for operational or configuration data), you can verify that it works for all possible data items using:
-
-```bash
-$ ncs --check-callbacks
-```
-
-
-
-
-
-Debug Dump
-
-If you suspect you have experienced a bug in NSO, or NSO told you so, you can give Support a debug dump to help us diagnose the problem. It contains a lot of status information (including a full `ncs --status` report) and some internal state information. This information is only readable and comprehensible to the NSO development team, so send the dump to your support contact. A debug dump is created using:
-
-```bash
-$ ncs --debug-dump mydump1
-```
-
-Just as in CSI on TV, the information must be collected as soon as possible after the event. Many interesting traces will wash away with time, or stay undetected if there are lots of irrelevant facts in the dump.
-
-If NSO gets stuck while terminating, it can optionally create a debug dump after being stuck for 60 seconds. To enable this mechanism, set the environment variable `$NCS_DEBUG_DUMP_NAME` to a filename of your choice.
-
-
-
-
-
-Error Log
-
-Another thing you can do in case you suspect that you have experienced a bug in NSO is to collect the error log. The logged information is only readable and comprehensible to the NSO development team, so send the log to your support contact. The log actually consists of a number of files called `ncserr.log.*` - make sure to provide them all.
-
-
-
-
-
-System Dump
-
-If NSO aborts due to failure to allocate memory (see [Disaster Management](./#ug.ncs_sys_mgmt.disaster)), and you believe that this is due to a memory leak in NSO, creating one or more debug dumps as described above (before NSO aborts) will produce the most useful information for Support. If this is not possible, NSO will produce a system dump by default before aborting, unless `DISABLE_NCS_DUMP` is set.
-
-The default system dump file name is `ncs_crash.dump`, and it can be changed by setting the environment variable `$NCS_DUMP` before starting NSO. The dumped information is only comprehensible to the NSO development team, so send the dump to your support contact.
-
-
-
-
-
-System Call Trace
-
-To catch certain types of problems, especially relating to system start and configuration, the operating system's system call trace can be invaluable. This tool is called `strace`/`ktrace`/`truss`, depending on the platform. Run NSO under it using the instructions below and send the result to your support contact for a diagnosis.
-
-Linux:
-
-```bash
-# strace -f -o mylog1.strace -s 1024 ncs ...
-```
-
-BSD:
-
-```bash
-# ktrace -ad -f mylog1.ktrace ncs ...
-# kdump -f mylog1.ktrace > mylog1.kdump
-```
-
-Solaris:
-
-```bash
-# truss -f -o mylog1.truss ncs ...
-```
-
-
diff --git a/administration/management/system-management/alarms.md b/administration/management/system-management/alarms.md
deleted file mode 100644
index 82fcb77f..00000000
--- a/administration/management/system-management/alarms.md
+++ /dev/null
@@ -1,693 +0,0 @@
-# Alarm Types
-
-```
-alarm-type
- cdb-offload-threshold-too-low
- certificate-expiration
- ha-alarm
- ha-node-down-alarm
- ha-primary-down
- ha-secondary-down
- ncs-cluster-alarm
- cluster-subscriber-failure
- ncs-dev-manager-alarm
- abort-error
- auto-configure-failed
- commit-through-queue-blocked
- commit-through-queue-failed
- commit-through-queue-failed-transiently
- commit-through-queue-rollback-failed
- configuration-error
- connection-failure
- final-commit-error
- missing-transaction-id
- ned-live-tree-connection-failure
- out-of-sync
- revision-error
- ncs-package-alarm
- package-load-failure
- package-operation-failure
- ncs-service-manager-alarm
- service-activation-failure
- ncs-snmp-notification-receiver-alarm
- receiver-configuration-error
- time-violation-alarm
- transaction-lock-time-violation
-```
-
-## Alarm Type Descriptions
-
-
-
-abort-error
-
-* **Initial Perceived Severity**
- major
-* **Description**
- An error happened while aborting or reverting a transaction. Device's
-configuration is likely to be inconsistent with the NCS CDB.
-* **Recommended Action**
- Inspect the configuration difference with compare-config,
- resolve conflicts with sync-from or sync-to if any.
-* **Clear Condition(s)**
- If NCS achieves sync with the device, or receives a transaction
- id for a netconf session towards the device, the alarm is cleared.
-* **Alarm Message(s)**
- * `Device {dev} is locked`
- * `Device {dev} is southbound locked`
- * `abort error`
-
-
-
-
-
-alarm-type
-
-* **Description**
- Base identity for alarm types. A unique identification of the
-fault, not including the managed object. Alarm types are used
-to identify if alarms indicate the same problem or not, for
-lookup into external alarm documentation, etc. Different
-managed object types and instances can share alarm types. If
-the same managed object reports the same alarm type, it is to
-be considered to be the same alarm. The alarm type is a
-simplification of the different X.733 and 3GPP alarm IRP alarm
-correlation mechanisms and it allows for hierarchical
-extensions.
-A 'specific-problem' can be used in addition to the alarm type
-in order to have different alarm types based on information not
-known at design-time, such as values in textual SNMP
-Notification varbinds.
-
-
-
-
-
-auto-configure-failed
-
-* **Initial Perceived Severity**
- warning
-* **Description**
- Device auto-configure exhausted its retry attempts trying
-to connect and sync the device.
-* **Recommended Action**
- Make sure that NCS can connect to the device and then sync
- the configuration.
-* **Clear Condition(s)**
- If NCS achieves sync with the device, the alarm is cleared.
-* **Alarm Message(s)**
- * `Auto-configure has exhausted its retry attempts`
-
-
-
-
-
-cdb-offload-threshold-too-low
-
-* **Initial Perceived Severity**
- warning
-* **Description**
- The CDB offload threshold configuration is set too low, causing
-the CDB memory footprint to reach the threshold even when there
-is no offloadable data present in the memory.
-* **Recommended Action**
- If system memory is sufficient, increase the threshold value, otherwise
- increase the system memory capacity.
-* **Clear Condition(s)**
- This alarm is cleared when CDB offload can lower the CDB memory
- footprint below the configured threshold value.
-* **Alarm Message(s)**
- * `CDB offload threshold is too low`
-
-
-
-
-
-certificate-expiration
-
-* **Description**
- The certificate is nearing its expiry or has already expired.
-The severity depends on the time left to expiry; it ranges from
-warning to critical.
-* **Recommended Action**
- Replace certificate.
-* **Clear Condition(s)**
- This alarm is cleared when the certificate is no longer loaded.
-* **Alarm Message(s)**
- * `Certificate expires in less than {days} day(s)`
- * `Certificate has expired`
-
-
-
-
-
-cluster-subscriber-failure
-
-* **Initial Perceived Severity**
- critical
-* **Description**
- Failure to establish a notification subscription towards
-a remote node.
-* **Recommended Action**
- Verify IP connectivity between cluster nodes.
-* **Clear Condition(s)**
- This alarm is cleared if NCS succeeds to establish a
- subscription towards the remote node, or when the subscription
- is explicitly stopped.
-* **Alarm Message(s)**
- * `Failed to establish netconf notification
- subscription to node ~s, stream ~s`
- * `Commit queue items with remote nodes will not receive required
- event notifications.`
-
-
-
-
-
-commit-through-queue-blocked
-
-* **Initial Perceived Severity**
- warning
-* **Description**
- A commit was queued behind a queue item waiting to be able to
-connect to one of its devices. This is potentially dangerous
-since one unreachable device can fill up the commit
-queue indefinitely.
-* **Clear Condition(s)**
- An alarm raised due to a transient error will be cleared
- when NCS is able to reconnect to the device.
-* **Alarm Message(s)**
- * `Commit queue item ~p is blocked because item ~p cannot connect to ~s`
-
-
-
-
-
-commit-through-queue-failed
-
-* **Initial Perceived Severity**
- critical
-* **Description**
- A queued commit failed.
-* **Recommended Action**
- Resolve with rollback if possible.
-* **Clear Condition(s)**
- This alarm is not cleared.
-* **Alarm Message(s)**
- * `Failed to authenticate towards device {device}: {reason}`
- * `Device {dev} is locked`
- * `{Reason}`
- * `Device {dev} is southbound locked`
- * `Commit queue item {CqId} rollback invoked`
- * `Commit queue item {CqId} has failed: Operation failed because:
- inconsistent database`
- * `Remote commit queue item ~p cannot be unlocked:
- cluster node not configured correctly`
-
-
-
-
-
-commit-through-queue-failed-transiently
-
-* **Initial Perceived Severity**
- critical
-* **Description**
- A queued commit failed as it exhausted its retry attempts
-on transient errors.
-* **Recommended Action**
- Resolve with rollback if possible.
-* **Clear Condition(s)**
- This alarm is not cleared.
-* **Alarm Message(s)**
- * `Failed to connect to device {dev}: {reason}`
- * `Connection to {dev} timed out`
- * `Failed to authenticate towards device {device}: {reason}`
- * `The configuration database is locked for device {dev}: {reason}`
- * `the configuration database is locked by session {id} {identification}`
- * `the configuration database is locked by session {id} {identification}`
- * `{Dev}: Device is locked in a {Op} operation by session {session-id}`
- * `resource denied`
- * `Commit queue item {CqId} rollback invoked`
- * `Commit queue item {CqId} has failed: Operation failed because:
- inconsistent database`
- * `Remote commit queue item ~p cannot be unlocked:
- cluster node not configured correctly`
-
-
-
-
-
-commit-through-queue-rollback-failed
-
-* **Initial Perceived Severity**
- critical
-* **Description**
- Rollback of a commit-queue item failed.
-* **Recommended Action**
- Investigate the status of the device and resolve the
- situation by issuing the appropriate action, i.e., service
- redeploy or a sync operation.
-* **Clear Condition(s)**
- This alarm is not cleared.
-* **Alarm Message(s)**
- * `{Reason}`
-
-
-
-
-
-configuration-error
-
-* **Initial Perceived Severity**
- critical
-* **Description**
- Invalid configuration of an NCS-managed device; NCS cannot recognize
-the parameters needed to connect to the device.
-* **Recommended Action**
- Verify that the configuration parameters defined in
- tailf-ncs-devices.yang submodule are consistent for this device.
-* **Clear Condition(s)**
- The alarm is cleared when NCS reads the configuration
- parameters for the device, and is raised again if the
- parameters are invalid.
-* **Alarm Message(s)**
- * `Failed to resolve IP address for {dev}`
- * `the configuration database is locked by session {id} {identification}`
- * `{Reason}`
- * `Resource {resource} doesn't exist`
-
-
-
-
-
-connection-failure
-
-* **Initial Perceived Severity**
- major
-* **Description**
- NCS failed to connect to a managed device before the timeout expired.
-* **Recommended Action**
- Verify address, port, authentication, check that the device is up
- and running. If the error occurs intermittently, increase
- connect-timeout.
-* **Clear Condition(s)**
- If NCS successfully reconnects to the device, the alarm is cleared.
-* **Alarm Message(s)**
- * `The connection to {dev} was closed`
- * `Failed to connect to device {dev}: {reason}`
-
-
-
-
-
-final-commit-error
-
-* **Initial Perceived Severity**
- critical
-* **Description**
- A managed device validated a configuration change, but failed to
-commit. When this happens, NCS and the device are out of sync.
-* **Recommended Action**
- Reconcile by comparing and sync-from or sync-to.
-* **Clear Condition(s)**
- If NCS achieves sync with the device, the alarm is cleared.
-* **Alarm Message(s)**
- * `The connection to {dev} was closed`
- * `External error in the NED implementation for device {dev}: {reason}`
- * `Internal error in the NED NCS framework affecting device {dev}: {reason}`
-
-
-
-
-
-ha-alarm
-
-* **Description**
- Base type for all alarms related to high availability.
-This is never reported, sub-identities for the specific
-high availability alarms are used in the alarms.
-
-
-
-
-
-ha-node-down-alarm
-
-* **Description**
- Base type for all alarms related to nodes going down in
-high availability. This is never reported; sub-identities
-for the specific node down alarms are used in the alarms.
-
-
-
-
-
-ha-primary-down
-
-* **Initial Perceived Severity**
- critical
-* **Description**
- The node lost the connection to the primary node.
-* **Recommended Action**
- Make sure the HA cluster is operational, investigate why
- the primary went down and bring it up again.
-* **Clear Condition(s)**
- This alarm is never automatically cleared and has to be cleared
- manually when the HA cluster has been restored.
-* **Alarm Message(s)**
- * `Lost connection to primary due to: Primary closed connection`
- * `Lost connection to primary due to: Tick timeout`
- * `Lost connection to primary due to: code {Code}`
-
-
-
-
-
-ha-secondary-down
-
-* **Initial Perceived Severity**
- critical
-* **Description**
- The node lost the connection to a secondary node.
-* **Recommended Action**
- Investigate why the secondary node went down, fix the
- connectivity issue and reconnect the secondary to the
- HA cluster.
-* **Clear Condition(s)**
- This alarm is cleared when the secondary node is reconnected
- to the HA cluster.
-* **Alarm Message(s)**
- * `Lost connection to secondary`
-
-
-
-
-
-missing-transaction-id
-
-* **Initial Perceived Severity**
- warning
-* **Description**
- A device announced in its NETCONF hello message that
-it supports the transaction-id as defined in
-http://tail-f.com/yang/netconf-monitoring. However when
-NCS tries to read the transaction-id no data is returned.
-The NCS check-sync feature will not work. This is usually
-a case of misconfigured NACM rules on the managed device.
-* **Recommended Action**
- Verify NACM rules on the concerned device.
-* **Clear Condition(s)**
- If NCS successfully reads a transaction id for which
- it had previously failed to do so, the alarm is cleared.
-* **Alarm Message(s)**
- * `{Reason}`
-
-
-
-
-
-ncs-cluster-alarm
-
-* **Description**
- Base type for all alarms related to cluster.
-This is never reported, sub-identities for the specific
-cluster alarms are used in the alarms.
-
-
-
-
-
-ncs-dev-manager-alarm
-
-* **Description**
- Base type for all alarms related to the device manager
-This is never reported, sub-identities for the specific
-device alarms are used in the alarms.
-
-
-
-
-
-ncs-package-alarm
-
-* **Description**
- Base type for all alarms related to packages.
-This is never reported, sub-identities for the specific
-package alarms are used in the alarms.
-
-
-
-
-
-ncs-service-manager-alarm
-
-* **Description**
- Base type for all alarms related to the service manager
-This is never reported, sub-identities for the specific
-service alarms are used in the alarms.
-
-
-
-
-
-ncs-snmp-notification-receiver-alarm
-
-* **Description**
- Base type for SNMP notification receiver Alarms. This is never
-reported, sub-identities for specific SNMP notification receiver
-alarms are used in the alarms.
-
-
-
-
-
-ned-live-tree-connection-failure
-
-* **Initial Perceived Severity**
- major
-* **Description**
- NCS failed to connect to a managed device using one of the optional
-live-status-protocol NEDs.
-* **Recommended Action**
- Verify the configuration of the optional NEDs.
- If the error occurs intermittently, increase connect-timeout.
-* **Clear Condition(s)**
- If NCS successfully reconnects to the managed device,
- the alarm is cleared.
-* **Alarm Message(s)**
- * `The connection to {dev} was closed`
- * `Failed to connect to device {dev}: {reason}`
-
-
-
-
-
-out-of-sync
-
-* **Initial Perceived Severity**
- major
-* **Description**
- A managed device is out of sync with NCS. Usually it means that the
-device has been configured out of band from NCS's point of view.
-* **Recommended Action**
- Inspect the difference with compare-config, reconcile by
- invoking sync-from or sync-to.
-* **Clear Condition(s)**
- If NCS achieves sync with the device, the alarm is cleared.
-* **Alarm Message(s)**
- * `Device {dev} is out of sync`
- * `Out of sync due to no-networking or failed commit-queue commits.`
- * `got: ~s expected: ~s.`
-
-
-
-
-
-package-load-failure
-
-* **Initial Perceived Severity**
- critical
-* **Description**
- NCS failed to load a package.
-* **Recommended Action**
- Check the package for the reason.
-* **Clear Condition(s)**
- If NCS successfully loads a package for which an alarm
- was previously raised, it will be cleared.
-* **Alarm Message(s)**
- * `failed to open file {file}: {str}`
- * `Specific to the concerned package.`
-
-
-
-
-
-package-operation-failure
-
-* **Initial Perceived Severity**
- critical
-* **Description**
- A package has some problem with its operation.
-* **Recommended Action**
- Check the package for the reason.
-* **Clear Condition(s)**
- This alarm is not cleared.
-
-
-
-
-
-receiver-configuration-error
-
-* **Initial Perceived Severity**
- major
-* **Description**
- The snmp-notification-receiver could not setup its configuration,
-either at startup or when reconfigured. SNMP notifications will now
-be missed.
-* **Recommended Action**
- Check the error-message and change the configuration.
-* **Clear Condition(s)**
- This alarm will be cleared when the NCS is configured
- to successfully receive SNMP notifications.
-* **Alarm Message(s)**
- * `Configuration has errors.`
-
-
-
-
-
-revision-error
-
-* **Initial Perceived Severity**
- major
-* **Description**
- A managed device arrived with a known module but a too-new revision.
-* **Recommended Action**
- Upgrade the Device NED using the new YANG revision in order
- to use the new features in the device.
-* **Clear Condition(s)**
- If all device YANG modules are supported by NCS,
- the alarm is cleared.
-* **Alarm Message(s)**
- * `The device has YANG module revisions not supported by
- NCS. Use the /devices/device/check-yang-modules
- action to check which modules that are not compatible.`
-
-
-
-
-
-service-activation-failure
-
-* **Initial Perceived Severity**
- critical
-* **Description**
- A service failed during re-deploy.
-* **Recommended Action**
- Corrective action and another re-deploy is needed.
-* **Clear Condition(s)**
- If the service is successfully redeployed, the alarm is cleared.
-* **Alarm Message(s)**
- * `Multiple device errors:
-{str}`
-
-
-
-
-
-time-violation-alarm
-
-* **Description**
- Base type for all alarms related to time violations.
-This is never reported, sub-identities for the specific
-time violation alarms are used in the alarms.
-
-
-
-
-
-transaction-lock-time-violation
-
-* **Initial Perceived Severity**
- warning
-* **Description**
- The transaction lock time exceeded its threshold and might be stuck
-in the critical section. This threshold is configured in
-/ncs-config/transaction-lock-time-violation-alarm/timeout.
-* **Recommended Action**
- Investigate if the transaction is stuck and possibly
- interrupt it by closing the user session which it is
- attached to.
-* **Clear Condition(s)**
- This alarm is cleared when the transaction has finished.
-* **Alarm Message(s)**
- * `Transaction lock time exceeded threshold.`
-
-
-
diff --git a/administration/management/system-management/cisco-smart-licensing.md b/administration/management/system-management/cisco-smart-licensing.md
deleted file mode 100644
index 780327df..00000000
--- a/administration/management/system-management/cisco-smart-licensing.md
+++ /dev/null
@@ -1,106 +0,0 @@
----
-description: Manage purchase and licensing of Cisco software.
----
-
-# Cisco Smart Licensing
-
-[Cisco Smart Licensing](https://www.cisco.com/web/ordering/smart-software-licensing/index.html) is a cloud-based approach to licensing, and it simplifies the purchase, deployment, and management of Cisco software assets. Entitlements are purchased through a Cisco account via Cisco Commerce Workspace (CCW) and are immediately deposited into a Smart Account for usage. This eliminates the need to install license files on every device. Products that are smart-enabled communicate directly to Cisco to report consumption.
-
-Cisco Smart Software Manager (CSSM) enables the management of software licenses and Smart Account from a single portal. The interface allows you to activate your product, manage entitlements, and renew and upgrade software.
-
-A functioning Smart Account is required to complete the registration process. For detailed information about CSSM, see [Cisco Smart Software Manager](https://www.cisco.com/c/en/us/buy/smart-accounts/software-manager.html).
-
-## Smart Accounts and Virtual Accounts
-
-A virtual account exists as a sub-account within the Smart Account. Virtual accounts are a customer-defined structure based on organizational layout, business function, geography, or any defined hierarchy. They are created and maintained by the Smart Account administrator(s).
-
-Visit [Cisco Software Central](https://software.cisco.com/) to learn about how to create and manage Smart Accounts.
-
-### Request a Smart Account
-
-The creation of a new Smart Account is a one-time event, and subsequent management of users is a capability provided through the tool. To request a Smart Account, visit [Cisco Software Central](https://software.cisco.com/) and take the following steps:
-
-1. After logging in, select **Request a Smart Account** in the Administration section.
-
-
-2. Select the type of Smart Account to create. There are two options: (a) Individual Smart Account requiring agreement to represent your company. By creating this Smart Account, you agree to authorization to create and manage product and service entitlements, users, and roles on behalf of your organization. (b) Create the account on behalf of someone else.
-
-
-3. Provide the required domain identifier and the preferred account name.
-
-
-4. The account request will be pending approval of the Account Domain Identifier. A subsequent email will be sent to the requester to complete the setup process.
-
-
-
-### Adding Users to a Smart Account
-
-Smart Account user management is available in the **Administration** section of [Cisco Software Central](https://software.cisco.com/). Take the following steps to add a new user to a Smart Account:
-
-1. After logging in, select **Manage Smart Account** in the **Administration** section.
-
-
-2. Choose the **Users** tab.
-
-
-3. Select **New User** and follow the instructions in the wizard to add a new user.
-
-
-
-### Create a License Registration Token
-
-1. To create a new token, log into CSSM and select the appropriate Virtual Account.
-
-
-2. Click on the **Smart Licenses** link to enter CSSM.
-
-
-3. In CSSM click on **New Token**.
-
-
-4. Follow the dialog to provide a description, expiration, and export compliance applicability before accepting the terms and responsibilities. Click on **Create Token** to continue.
-
-
-5. Click on the new token.
-
-
-6. Copy the token from the dialog window into your clipboard.
-
-
-7. Go to the NSO CLI and provide the token to the `license smart register idtoken` command:
-
- ```cli
- admin@ncs# license smart register idtoken YzY2YjFlOTYtOWYzZi00MDg1...
- Registration process in progress.
- Use the 'show license status' command to check the progress and result.
- ```
-
-### Notes on Configuring Smart Licensing
-
-* If `ncs.conf` contains configuration for any of java-executable, java-options, override-url/url, or proxy/url under the configure path `/ncs-config/smart-license/smart-agent/`, any corresponding configuration done via the CLI is ignored.
-* The smart licensing component of NSO runs its own Java virtual machine. Usually, the default Java options are sufficient:
-
- ```yang
- leaf java-options {
- tailf:info "Smart licensing Java VM start options";
- type string;
- default "-Xmx64M -Xms16M
- -Djava.security.egd=file:/dev/./urandom";
- description
- "Options which NCS will use when starting
-       the Java VM.";
-  }
- ```
-
- \
- If you, for some reason, need to modify the Java options, remember to include the default values as found in the YANG model.
-
-### Validation and Troubleshooting
-
-#### Available `show` and `debug` Commands
-
-* `show license all`: Displays all information.
-* `show license status`: Displays status information.
-* `show license summary`: Displays summary.
-* `show license tech`: Displays license tech support information.
-* `show license usage`: Displays usage information.
-* `debug smart_lic all`: All available Smart Licensing debug flags.
diff --git a/administration/management/system-management/log-messages-and-formats.md b/administration/management/system-management/log-messages-and-formats.md
deleted file mode 100644
index 2435a193..00000000
--- a/administration/management/system-management/log-messages-and-formats.md
+++ /dev/null
@@ -1,3602 +0,0 @@
-# Log Messages and Formats
-
-
-
-
-AAA_LOAD_FAIL
-
-AAA_LOAD_FAIL
-
-* **Severity**
- `CRIT`
-* **Description**
-  Failed to load the AAA data. An external database may be misbehaving, or AAA may be badly mounted/populated.
-* **Format String**
- `"Failed to load AAA: ~s"`
-
-
-
-
-
-
-ABORT_CAND_COMMIT
-
-ABORT_CAND_COMMIT
-
-* **Severity**
- `INFO`
-* **Description**
- Aborting candidate commit, request from user, reverting configuration.
-* **Format String**
- `"Aborting candidate commit, request from user, reverting configuration."`
-
-
-
-
-
-
-ABORT_CAND_COMMIT_REBOOT
-
-ABORT_CAND_COMMIT_REBOOT
-
-* **Severity**
- `INFO`
-* **Description**
-  ConfD restarted while having an ongoing candidate commit timer, reverting configuration.
-* **Format String**
- `"ConfD restarted while having a ongoing candidate commit timer, reverting configuration."`
-
-
-
-
-
-
-ABORT_CAND_COMMIT_TERM
-
-ABORT_CAND_COMMIT_TERM
-
-* **Severity**
- `INFO`
-* **Description**
- Candidate commit session terminated, reverting configuration.
-* **Format String**
- `"Candidate commit session terminated, reverting configuration."`
-
-
-
-
-
-
-ABORT_CAND_COMMIT_TIMER
-
-ABORT_CAND_COMMIT_TIMER
-
-* **Severity**
- `INFO`
-* **Description**
- Candidate commit timer expired, reverting configuration.
-* **Format String**
- `"Candidate commit timer expired, reverting configuration."`
-
-
-
-
-
-
-ACCEPT_FATAL
-
-ACCEPT_FATAL
-
-* **Severity**
- `CRIT`
-* **Description**
- ConfD encountered an OS-specific error indicating that networking support is unavailable.
-* **Format String**
- `"Fatal error for accept() - ~s"`
-
-
-
-
-
-
-ACCEPT_FDLIMIT
-
-ACCEPT_FDLIMIT
-
-* **Severity**
- `CRIT`
-* **Description**
- ConfD failed to accept a connection due to reaching the process or system-wide file descriptor limit.
-* **Format String**
- `"Out of file descriptors for accept() - ~s limit reached"`
-
-
-
-
-
-
-AUTH_LOGIN_FAIL
-
-AUTH_LOGIN_FAIL
-
-* **Severity**
- `INFO`
-* **Description**
- A user failed to log in to ConfD.
-* **Format String**
- `"login failed via ~s from ~s with ~s: ~s"`
-
-
-
-
-
-
-AUTH_LOGIN_SUCCESS
-
-AUTH_LOGIN_SUCCESS
-
-* **Severity**
- `INFO`
-* **Description**
- A user logged into ConfD.
-* **Format String**
- `"logged in to ~s via ~s from ~s with ~s using ~s authentication"`
-
-
-
-
-
-
-AUTH_LOGOUT
-
-AUTH_LOGOUT
-
-* **Severity**
- `INFO`
-* **Description**
- A user was logged out from ConfD.
-* **Format String**
- `"logged out <~s> user"`
-
-
-
-
-
-
-BADCONFIG
-
-BADCONFIG
-
-* **Severity**
- `CRIT`
-* **Description**
- confd.conf contained bad data.
-* **Format String**
- `"Bad configuration: ~s:~s: ~s"`
-
-
-
-
-
-
-BAD_DEPENDENCY
-
-BAD_DEPENDENCY
-
-* **Severity**
- `ERR`
-* **Description**
- A dependency was not found
-* **Format String**
- `"The dependency node '~s' for node '~s' in module '~s' does not exist"`
-
-
-
-
-
-
-BAD_NS_HASH
-
-BAD_NS_HASH
-
-* **Severity**
- `CRIT`
-* **Description**
- Two namespaces have the same hash value. The namespace hashvalue MUST be unique. You can pass the flag --nshash to confdc when linking the .xso files to force another value for the namespace hash.
-* **Format String**
- `"~s"`
-
-
-
-
-
-
-BIND_ERR
-
-BIND_ERR
-
-* **Severity**
- `CRIT`
-* **Description**
- ConfD failed to bind to one of the internally used listen sockets.
-* **Format String**
- `"~s"`
-
-
-
-
-
-
-BRIDGE_DIED
-
-BRIDGE_DIED
-
-* **Severity**
- `ERR`
-* **Description**
- ConfD is configured to start the confd_aaa_bridge and the C program died.
-* **Format String**
- `"confd_aaa_bridge died - ~s"`
-
-
-
-
-
-
-CANDIDATE_BAD_FILE_FORMAT
-
-CANDIDATE_BAD_FILE_FORMAT
-
-* **Severity**
- `WARNING`
-* **Description**
- The candidate database file has a bad format. The candidate database is reset to the empty database.
-* **Format String**
- `"Bad format found in candidate db file ~s; resetting candidate"`
-
-
-
-
-
-
-CANDIDATE_CORRUPT_FILE
-
-CANDIDATE_CORRUPT_FILE
-
-* **Severity**
- `WARNING`
-* **Description**
- The candidate database file is corrupt and cannot be read. The candidate database is reset to the empty database.
-* **Format String**
- `"Corrupt candidate db file ~s; resetting candidate"`
-
-
-
-
-
-
-CAND_COMMIT_ROLLBACK_DONE
-
-CAND_COMMIT_ROLLBACK_DONE
-
-* **Severity**
- `INFO`
-* **Description**
- Candidate commit rollback done
-* **Format String**
- `"Candidate commit rollback done"`
-
-
-
-
-
-
-CAND_COMMIT_ROLLBACK_FAILURE
-
-CAND_COMMIT_ROLLBACK_FAILURE
-
-* **Severity**
- `ERR`
-* **Description**
- Failed to rollback candidate commit
-* **Format String**
- `"Failed to rollback candidate commit due to: ~s"`
-
-
-
-
-
-
-CDB_BACKUP
-
-CDB_BACKUP
-
-* **Severity**
- `INFO`
-* **Description**
- CDB data backed up after migration to a new storage backend.
-* **Format String**
- `"CDB: ~s backed up to ~s"`
-
-
-
-
-
-
-CDB_BOOT_ERR
-
-CDB_BOOT_ERR
-
-* **Severity**
- `CRIT`
-* **Description**
- CDB failed to start. Some grave error in the cdb data files prevented CDB from starting - a recovery from backup is necessary.
-* **Format String**
- `"CDB boot error: ~s"`
-
-
-
-
-
-
-CDB_CLIENT_TIMEOUT
-
-CDB_CLIENT_TIMEOUT
-
-* **Severity**
- `ERR`
-* **Description**
- A CDB client failed to answer within the timeout period. The client will be disconnected.
-* **Format String**
- `"CDB client (~s) timed out, waiting for ~s"`
-
-
-
-
-
-
-CDB_CONFIG_LOST
-
-CDB_CONFIG_LOST
-
-* **Severity**
- `INFO`
-* **Description**
-  CDB found its data files but no schema file. CDB recovers by starting from an empty database.
-* **Format String**
- `"CDB: lost config, deleting DB"`
-
-
-
-
-
-
-CDB_DB_LOST
-
-CDB_DB_LOST
-
-* **Severity**
- `INFO`
-* **Description**
-  CDB found its data schema file but not its data file. CDB recovers by starting from an empty database.
-* **Format String**
- `"CDB: lost DB, deleting old config"`
-
-
-
-
-
-
-CDB_FATAL_ERROR
-
-CDB_FATAL_ERROR
-
-* **Severity**
- `CRIT`
-* **Description**
-  CDB encountered an unrecoverable error
-* **Format String**
- `"fatal error in CDB: ~s"`
-
-
-
-
-
-
-CDB_INIT_LOAD
-
-CDB_INIT_LOAD
-
-* **Severity**
- `INFO`
-* **Description**
- CDB is processing an initialization file.
-* **Format String**
- `"CDB load: processing file: ~s"`
-
-
-
-
-
-
-CDB_MIGRATE
-
-CDB_MIGRATE
-
-* **Severity**
- `INFO`
-* **Description**
- CDB data migration to a new storage backend.
-* **Format String**
- `"CDB: migrate ~s to ~s"`
-
-
-
-
-
-
-CDB_OFFLOAD
-
-CDB_OFFLOAD
-
-* **Severity**
- `DEBUG`
-* **Description**
- CDB data offload started.
-* **Format String**
- `"CDB: offload ~s from memory"`
-
-
-
-
-
-
-CDB_OP_INIT
-
-CDB_OP_INIT
-
-* **Severity**
- `ERR`
-* **Description**
- The operational DB was deleted and re-initialized (because of upgrade or corrupt file)
-* **Format String**
- `"CDB: Operational DB re-initialized"`
-
-
-
-
-
-
-CDB_STALE_BACKUP
-
-CDB_STALE_BACKUP
-
-* **Severity**
- `INFO`
-* **Description**
- CDB backup data left on disk after migration that can be removed to free up disk space.
-* **Format String**
- `"CDB: ~s backup file(s) occupying ~sMiB, remove to free up disk space: ~s"`
-
-
-
-
-
-
-CDB_UPGRADE_FAILED
-
-CDB_UPGRADE_FAILED
-
-* **Severity**
- `ERR`
-* **Description**
- Automatic CDB upgrade failed. This means that the data model has been changed in a non-supported way.
-* **Format String**
- `"CDB: Upgrade failed: ~s"`
-
-
-
-
-
-
-CGI_REQUEST
-
-CGI_REQUEST
-
-* **Severity**
- `INFO`
-* **Description**
- CGI script requested.
-* **Format String**
- `"CGI: '~s' script with method ~s"`
-
-
-
-
-
-
-CHANGE_USER
-
-CHANGE_USER
-
-* **Severity**
- `INFO`
-* **Description**
-  A NETCONF request to change the user for authorization completed successfully.
-* **Format String**
- `"changed user to ~s, groups ~s"`
-
-
-
-
-
-
-CLI_CMD
-
-CLI_CMD
-
-* **Severity**
- `INFO`
-* **Description**
- User executed a CLI command.
-* **Format String**
- `"CLI '~s'"`
-
-
-
-
-
-
-CLI_CMD_ABORTED
-
-CLI_CMD_ABORTED
-
-* **Severity**
- `INFO`
-* **Description**
- CLI command aborted.
-* **Format String**
- `"CLI aborted"`
-
-
-
-
-
-
-CLI_CMD_DONE
-
-CLI_CMD_DONE
-
-* **Severity**
- `INFO`
-* **Description**
- CLI command finished successfully.
-* **Format String**
- `"CLI done"`
-
-
-
-
-
-
-CLI_DENIED
-
-CLI_DENIED
-
-* **Severity**
- `INFO`
-* **Description**
-  A user was denied execution of a CLI command due to insufficient permissions.
-* **Format String**
- `"CLI denied '~s'"`
-
-
-
-
-
-
-COMMIT_INFO
-
-COMMIT_INFO
-
-* **Severity**
- `INFO`
-* **Description**
- Information about configuration changes committed to the running data store.
-* **Format String**
- `"commit ~s"`
-
-
-
-
-
-
-COMMIT_QUEUE_CORRUPT
-
-COMMIT_QUEUE_CORRUPT
-
-* **Severity**
- `ERR`
-* **Description**
- Failed to load commit queue. ConfD recovers by starting from an empty commit queue.
-* **Format String**
- `"Resetting commit queue due do inconsistent or corrupt data."`
-
-
-
-
-
-
-CONFIG_CHANGE
-
-CONFIG_CHANGE
-
-* **Severity**
- `INFO`
-* **Description**
- A change to ConfD configuration has taken place, e.g., by a reload of the configuration file
-* **Format String**
- `"ConfD configuration change: ~s"`
-
-
-
-
-
-
-CONFIG_DEPRECATED
-
-CONFIG_DEPRECATED
-
-* **Severity**
- `WARNING`
-* **Description**
- confd.conf contains a deprecated value
-* **Format String**
- `"Config value is deprecated: ~s"`
-
-
-
-
-
-
-CONFIG_OBSOLETE
-
-CONFIG_OBSOLETE
-
-* **Severity**
- `WARNING`
-* **Description**
- confd.conf contains an obsolete value
-* **Format String**
- `"Config value is obsolete: ~s"`
-
-
-
-
-
-
-CONFIG_TRANSACTION_LIMIT
-
-CONFIG_TRANSACTION_LIMIT
-
-* **Severity**
- `INFO`
-* **Description**
- Configuration transaction limit reached, rejected new transaction request.
-* **Format String**
- `"Configuration transaction limit of type '~s' reached, rejected new transaction request"`
-
-
-
-
-
-
-CONSULT_FILE
-
-CONSULT_FILE
-
-* **Severity**
- `INFO`
-* **Description**
- ConfD is reading its configuration file.
-* **Format String**
- `"Consulting daemon configuration file ~s"`
-
-
-
-
-
-
-CRYPTO_KEYS_FAILED_LOADING
-
-CRYPTO_KEYS_FAILED_LOADING
-
-* **Severity**
- `INFO`
-* **Description**
- Crypto keys failed to load because the old active generation is missing in the new configuration.
-* **Format String**
- `"Cannot reload crypto keys since the old active generation is missing in the new list of keys."`
-
-
-
-
-
-
-DAEMON_DIED
-
-DAEMON_DIED
-
-* **Severity**
- `CRIT`
-* **Description**
- An external database daemon closed its control socket.
-* **Format String**
- `"Daemon ~s died"`
-
-
-
-
-
-
-DAEMON_TIMEOUT
-
-DAEMON_TIMEOUT
-
-* **Severity**
- `CRIT`
-* **Description**
- An external database daemon did not respond to a query.
-* **Format String**
- `"Daemon ~s timed out"`
-
-
-
-
-
-
-DEVEL_AAA
-
-DEVEL_AAA
-
-* **Severity**
- `INFO`
-* **Description**
- Developer aaa log message
-* **Format String**
- `"~s"`
-
-
-
-
-
-
-DEVEL_CAPI
-
-DEVEL_CAPI
-
-* **Severity**
- `INFO`
-* **Description**
- Developer C api log message
-* **Format String**
- `"~s"`
-
-
-
-
-
-
-DEVEL_CDB
-
-DEVEL_CDB
-
-* **Severity**
- `INFO`
-* **Description**
- Developer CDB log message
-* **Format String**
- `"~s"`
-
-
-
-
-
-
-DEVEL_CONFD
-
-DEVEL_CONFD
-
-* **Severity**
- `INFO`
-* **Description**
- Developer ConfD log message
-* **Format String**
- `"~s"`
-
-
-
-
-
-
-DEVEL_ECONFD
-
-DEVEL_ECONFD
-
-* **Severity**
- `INFO`
-* **Description**
- Developer econfd api log message
-* **Format String**
- `"~s"`
-
-
-
-
-
-
-DEVEL_SLS
-
-DEVEL_SLS
-
-* **Severity**
- `INFO`
-* **Description**
- Developer smartlicensing api log message
-* **Format String**
- `"~s"`
-
-
-
-
-
-
-DEVEL_SNMPA
-
-DEVEL_SNMPA
-
-* **Severity**
- `INFO`
-* **Description**
- Developer snmp agent log message
-* **Format String**
- `"~s"`
-
-
-
-
-
-
-DEVEL_SNMPGW
-
-DEVEL_SNMPGW
-
-* **Severity**
- `INFO`
-* **Description**
- Developer snmp GW log message
-* **Format String**
- `"~s"`
-
-
-
-
-
-
-DEVEL_WEBUI
-
-DEVEL_WEBUI
-
-* **Severity**
- `INFO`
-* **Description**
- Developer webui log message
-* **Format String**
- `"~s"`
-
-
-
-
-
-
-DUPLICATE_MODULE_NAME
-
-DUPLICATE_MODULE_NAME
-
-* **Severity**
- `CRIT`
-* **Description**
- Duplicate module name found.
-* **Format String**
- `"The module name '~s' is both defined in '~s' and '~s'."`
-
-
-
-
-
-
-DUPLICATE_NAMESPACE
-
-DUPLICATE_NAMESPACE
-
-* **Severity**
- `CRIT`
-* **Description**
- Duplicate namespace found.
-* **Format String**
- `"The namespace ~s is defined in both module ~s and ~s."`
-
-
-
-
-
-
-DUPLICATE_PREFIX
-
-DUPLICATE_PREFIX
-
-* **Severity**
- `CRIT`
-* **Description**
- Duplicate prefix found.
-* **Format String**
- `"The prefix ~s is defined in both ~s and ~s."`
-
-
-
-
-
-
-ERRLOG_SIZE_CHANGED
-
-ERRLOG_SIZE_CHANGED
-
-* **Severity**
- `INFO`
-* **Description**
- Notify change of log size for error log
-* **Format String**
- `"Changing size of error log (~s) to ~s (was ~s)"`
-
-
-
-
-
-
-EVENT_SOCKET_TIMEOUT
-
-EVENT_SOCKET_TIMEOUT
-
-* **Severity**
- `CRIT`
-* **Description**
- An event notification subscriber did not reply within the configured timeout period
-* **Format String**
- `"Event notification subscriber with bitmask ~s timed out, waiting for ~s"`
-
-
-
-
-
-
-EVENT_SOCKET_WRITE_BLOCK
-
-EVENT_SOCKET_WRITE_BLOCK
-
-* **Severity**
- `CRIT`
-* **Description**
- Write on an event socket blocked for too long time
-* **Format String**
- `"~s"`
-
-
-
-
-
-
-EXEC_WHEN_CIRCULAR_DEPENDENCY
-
-EXEC_WHEN_CIRCULAR_DEPENDENCY
-
-* **Severity**
- `WARNING`
-* **Description**
- An error occurred while evaluating a when-expression.
-* **Format String**
- `"When-expression evaluation error: circular dependency in ~s"`
-
-
-
-
-
-
-EXTAUTH_BAD_RET
-
-EXTAUTH_BAD_RET
-
-* **Severity**
- `ERR`
-* **Description**
- Authentication is external and the external program returned badly formatted data.
-* **Format String**
- `"External auth program (user=~s) ret bad output: ~s"`
-
-
-
-
-
-
-EXT_AUTH_2FA
-
-EXT_AUTH_2FA
-
-* **Severity**
- `INFO`
-* **Description**
- External challenge sent to a user.
-* **Format String**
- `"external challenge sent to ~s from ~s with ~s"`
-
-
-
-
-
-
-EXT_AUTH_2FA_FAIL
-
-EXT_AUTH_2FA_FAIL
-
-* **Severity**
- `INFO`
-* **Description**
- External challenge authentication failed for a user.
-* **Format String**
- `"external challenge authentication failed via ~s from ~s with ~s: ~s"`
-
-
-
-
-
-
-EXT_AUTH_2FA_SUCCESS
-
-EXT_AUTH_2FA_SUCCESS
-
-* **Severity**
- `INFO`
-* **Description**
- An external challenge authenticated user logged in.
-* **Format String**
- `"external challenge authentication succeeded via ~s from ~s with ~s, member of groups: ~s~s"`
-
-
-
-
-
-
-EXT_AUTH_FAIL
-
-EXT_AUTH_FAIL
-
-* **Severity**
- `INFO`
-* **Description**
- External authentication failed for a user.
-* **Format String**
- `"external authentication failed via ~s from ~s with ~s: ~s"`
-
-
-
-
-
-
-EXT_AUTH_SUCCESS
-
-EXT_AUTH_SUCCESS
-
-* **Severity**
- `INFO`
-* **Description**
- An externally authenticated user logged in.
-* **Format String**
- `"external authentication succeeded via ~s from ~s with ~s, member of groups: ~s~s"`
-
-
-
-
-
-
-EXT_AUTH_TOKEN_FAIL
-
-EXT_AUTH_TOKEN_FAIL
-
-* **Severity**
- `INFO`
-* **Description**
- External token authentication failed for a user.
-* **Format String**
- `"external token authentication failed via ~s from ~s with ~s: ~s"`
-
-
-
-
-
-
-EXT_AUTH_TOKEN_SUCCESS
-
-EXT_AUTH_TOKEN_SUCCESS
-
-* **Severity**
- `INFO`
-* **Description**
- An externally token authenticated user logged in.
-* **Format String**
- `"external token authentication succeeded via ~s from ~s with ~s, member of groups: ~s~s"`
-
-
-
-
-
-
-EXT_BIND_ERR
-
-EXT_BIND_ERR
-
-* **Severity**
- `CRIT`
-* **Description**
- ConfD failed to bind to one of the externally visible listen sockets.
-* **Format String**
- `"~s"`
-
-
-
-
-
-
-FILE_ERROR
-
-FILE_ERROR
-
-* **Severity**
- `CRIT`
-* **Description**
- File error
-* **Format String**
- `"~s: ~s"`
-
-
-
-
-
-
-FILE_LOAD
-
-FILE_LOAD
-
-* **Severity**
- `DEBUG`
-* **Description**
- System loaded a file.
-* **Format String**
- `"Loaded file ~s"`
-
-
-
-
-
-
-FILE_LOADING
-
-FILE_LOADING
-
-* **Severity**
- `DEBUG`
-* **Description**
- System starts to load a file.
-* **Format String**
- `"Loading file ~s"`
-
-
-
-
-
-
-FILE_LOAD_ERR
-
-FILE_LOAD_ERR
-
-* **Severity**
- `CRIT`
-* **Description**
- System tried to load a file in its load path and failed.
-* **Format String**
- `"Failed to load file ~s: ~s"`
-
-
-
-
-
-
-FXS_MISMATCH
-
-FXS_MISMATCH
-
-* **Severity**
- `ERR`
-* **Description**
- A secondary connected to a primary where the fxs files are different
-* **Format String**
- `"Fxs mismatch, secondary is not allowed"`
-
-
-
-
-
-
-GROUP_ASSIGN
-
-GROUP_ASSIGN
-
-* **Severity**
- `INFO`
-* **Description**
- A user was assigned to a set of groups.
-* **Format String**
- `"assigned to groups: ~s"`
-
-
-
-
-
-
-GROUP_NO_ASSIGN
-
-GROUP_NO_ASSIGN
-
-* **Severity**
- `INFO`
-* **Description**
- A user was logged in but wasn't assigned to any groups at all.
-* **Format String**
- `"Not assigned to any groups - all access is denied"`
-
-
-
-
-
-
-HA_BAD_VSN
-
-HA_BAD_VSN
-
-* **Severity**
- `ERR`
-* **Description**
- A secondary connected to a primary with an incompatible HA protocol version
-* **Format String**
- `"Incompatible HA version (~s, expected ~s), secondary is not allowed"`
-
-
-
-
-
-
-HA_DUPLICATE_NODEID
-
-HA_DUPLICATE_NODEID
-
-* **Severity**
- `ERR`
-* **Description**
- A secondary arrived with a node id which already exists
-* **Format String**
- `"Nodeid ~s already exists"`
-
-
-
-
-
-
-HA_FAILED_CONNECT
-
-HA_FAILED_CONNECT
-
-* **Severity**
- `ERR`
-* **Description**
-  An attempted 'become secondary' library call failed because the secondary couldn't connect to the primary
-* **Format String**
- `"Failed to connect to primary: ~s"`
-
-
-
-
-
-
-HA_SECONDARY_KILLED
-
-HA_SECONDARY_KILLED
-
-* **Severity**
- `ERR`
-* **Description**
- A secondary node didn't produce its ticks
-* **Format String**
- `"Secondary ~s killed due to no ticks"`
-
-
-
-
-
-
-INTERNAL_ERROR
-
-INTERNAL_ERROR
-
-* **Severity**
- `CRIT`
-* **Description**
- A ConfD internal error - should be reported to support@tail-f.com.
-* **Format String**
- `"Internal error: ~s"`
-
-
-
-
-
-
-IPC_CAPA_DBG_DUMP_DENIED
-
-IPC_CAPA_DBG_DUMP_DENIED
-
-* **Severity**
- `INFO`
-* **Description**
- Debug dump denied for user - capability not enabled.
-* **Format String**
- `"Debug dump denied for user '~s' - capability not enabled."`
-
-
-
-
-
-
-IPC_CAPA_DBG_DUMP_GRANTED
-
-IPC_CAPA_DBG_DUMP_GRANTED
-
-* **Severity**
- `INFO`
-* **Description**
- Debug dump allowed for user.
-* **Format String**
- `"Debug dump allowed for user '~s'."`
-
-
-
-
-
-
-JIT_ENABLED
-
-JIT_ENABLED
-
-* **Severity**
- `INFO`
-* **Description**
- Show if JIT is enabled.
-* **Format String**
- `"JIT ~s"`
-
-
-
-
-
-
-JSONRPC_LOG_MSG
-
-JSONRPC_LOG_MSG
-
-* **Severity**
- `INFO`
-* **Description**
- JSON-RPC traffic log message
-* **Format String**
- `"JSON-RPC traffic log: ~s"`
-
-
-
-
-
-
-JSONRPC_REQUEST
-
-JSONRPC_REQUEST
-
-* **Severity**
- `INFO`
-* **Description**
- JSON-RPC method requested.
-* **Format String**
- `"JSON-RPC: '~s' with JSON params ~s"`
-
-
-
-
-
-
-JSONRPC_REQUEST_ABSOLUTE_TIMEOUT
-
-JSONRPC_REQUEST_ABSOLUTE_TIMEOUT
-
-* **Severity**
- `INFO`
-* **Description**
- JSON-RPC absolute timeout.
-* **Format String**
- `"Stopping session due to absolute timeout: ~s"`
-
-
-
-
-
-
-JSONRPC_REQUEST_IDLE_TIMEOUT
-
-JSONRPC_REQUEST_IDLE_TIMEOUT
-
-* **Severity**
- `INFO`
-* **Description**
- JSON-RPC idle timeout.
-* **Format String**
- `"Stopping session due to idle timeout: ~s"`
-
-
-
-
-
-
-JSONRPC_WARN_MSG
-
-JSONRPC_WARN_MSG
-
-* **Severity**
- `WARNING`
-* **Description**
- JSON-RPC warning message
-* **Format String**
- `"JSON-RPC warning: ~s"`
-
-
-
-
-
-
-KICKER_MISSING_SCHEMA
-
-KICKER_MISSING_SCHEMA
-
-* **Severity**
- `INFO`
-* **Description**
- Failed to load kicker schema
-* **Format String**
- `"Failed to load kicker schema"`
-
-
-
-
-
-
-LIB_BAD_SIZES
-
-LIB_BAD_SIZES
-
-* **Severity**
- `ERR`
-* **Description**
- An application connecting to ConfD used a library version that can't handle the depth and number of keys used by the data model.
-* **Format String**
- `"Got connect from library with insufficient keypath depth/keys support (~s/~s, needs ~s/~s)"`
-
-
-
-
-
-
-LIB_BAD_VSN
-
-LIB_BAD_VSN
-
-* **Severity**
- `ERR`
-* **Description**
- An application connecting to ConfD used a library version that doesn't match the ConfD version (e.g. old version of the client library).
-* **Format String**
- `"Got library connect from wrong version (~s, expected ~s)"`
-
-
-
-
-
-
-LIB_NO_ACCESS
-
-LIB_NO_ACCESS
-
-* **Severity**
- `ERR`
-* **Description**
- Access check failure occurred when an application connected to ConfD.
-* **Format String**
- `"Got library connect with failed access check: ~s"`
-
-
-
-
-
-
-LISTENER_INFO
-
-LISTENER_INFO
-
-* **Severity**
- `INFO`
-* **Description**
- ConfD starts or stops to listen for incoming connections.
-* **Format String**
- `"~s to listen for ~s on ~s:~s"`
-
-
-
-
-
-
-LOCAL_AUTH_FAIL
-
-LOCAL_AUTH_FAIL
-
-* **Severity**
- `INFO`
-* **Description**
- Authentication for a locally configured user failed.
-* **Format String**
- `"local authentication failed via ~s from ~s with ~s: ~s"`
-
-
-
-
-
-
-LOCAL_AUTH_FAIL_BADPASS
-
-LOCAL_AUTH_FAIL_BADPASS
-
-* **Severity**
- `INFO`
-* **Description**
- Authentication for a locally configured user failed due to providing bad password.
-* **Format String**
- `"local authentication failed via ~s from ~s with ~s: ~s"`
-
-
-
-
-
-
-LOCAL_AUTH_FAIL_NOUSER
-
-LOCAL_AUTH_FAIL_NOUSER
-
-* **Severity**
- `INFO`
-* **Description**
- Authentication for a locally configured user failed due to user not found.
-* **Format String**
- `"local authentication failed via ~s from ~s with ~s: ~s"`
-
-
-
-
-
-
-LOCAL_AUTH_SUCCESS
-
-LOCAL_AUTH_SUCCESS
-
-* **Severity**
- `INFO`
-* **Description**
- A locally authenticated user logged in.
-* **Format String**
- `"local authentication succeeded via ~s from ~s with ~s, member of groups: ~s"`
-
-
-
-
-
-
-LOCAL_IPC_ACCESS_DENIED
-
-LOCAL_IPC_ACCESS_DENIED
-
-* **Severity**
- `INFO`
-* **Description**
- Local IPC access denied for user.
-* **Format String**
- `"Local IPC access denied for user ~s connecting from ~s"`
-
-
-
-
-
-
-LOGGING_DEST_CHANGED
-
-LOGGING_DEST_CHANGED
-
-* **Severity**
- `INFO`
-* **Description**
- The target logfile will change to another file
-* **Format String**
- `"Changing destination of ~s log to ~s"`
-
-
-
-
-
-
-LOGGING_SHUTDOWN
-
-LOGGING_SHUTDOWN
-
-* **Severity**
- `INFO`
-* **Description**
- Logging subsystem terminating
-* **Format String**
- `"Daemon logging terminating, reason: ~s"`
-
-
-
-
-
-
-LOGGING_STARTED
-
-LOGGING_STARTED
-
-* **Severity**
- `INFO`
-* **Description**
- Logging subsystem started
-* **Format String**
- `"Daemon logging started"`
-
-
-
-
-
-
-LOGGING_STARTED_TO
-
-LOGGING_STARTED_TO
-
-* **Severity**
- `INFO`
-* **Description**
- Write logs for a subsystem to a specific file
-* **Format String**
- `"Writing ~s log to ~s"`
-
-
-
-
-
-
-LOGGING_STATUS_CHANGED
-
-LOGGING_STATUS_CHANGED
-
-* **Severity**
- `INFO`
-* **Description**
- Notify a change of logging status (enabled/disabled) for a subsystem
-* **Format String**
- `"~s ~s log"`
-
-
-
-
-
-
-LOGIN_REJECTED
-
-LOGIN_REJECTED
-
-* **Severity**
- `INFO`
-* **Description**
- Authentication for a user was rejected by application callback.
-* **Format String**
- `"~s"`
-
-
-
-
-
-
-MAAPI_LOGOUT
-
-MAAPI_LOGOUT
-
-* **Severity**
- `INFO`
-* **Description**
- A maapi user was logged out.
-* **Format String**
- `"Logged out from maapi ctx=~s (~s)"`
-
-
-
-
-
-
-MAAPI_WRITE_TO_SOCKET_FAIL
-
-MAAPI_WRITE_TO_SOCKET_FAIL
-
-* **Severity**
- `INFO`
-* **Description**
- maapi failed to write to a socket.
-* **Format String**
- `"maapi server failed to write to a socket. Op: ~s Ecode: ~s Error: ~s~s"`
-
-
-
-
-
-
-MISSING_AES256CFB128_SETTINGS
-
-MISSING_AES256CFB128_SETTINGS
-
-* **Severity**
- `ERR`
-* **Description**
- AES256CFB128 keys were not found in confd.conf
-* **Format String**
- `"AES256CFB128 keys were not found in confd.conf"`
-
-
-
-
-
-
-MISSING_AESCFB128_SETTINGS
-
-MISSING_AESCFB128_SETTINGS
-
-* **Severity**
- `ERR`
-* **Description**
- AESCFB128 keys were not found in confd.conf
-* **Format String**
- `"AESCFB128 keys were not found in confd.conf"`
-
-
-
-
-
-
-MISSING_DES3CBC_SETTINGS
-
-MISSING_DES3CBC_SETTINGS
-
-* **Severity**
- `ERR`
-* **Description**
- DES3CBC keys were not found in confd.conf
-* **Format String**
- `"DES3CBC keys were not found in confd.conf"`
-
-
-
-
-
-
-MISSING_NS
-
-MISSING_NS
-
-* **Severity**
- `CRIT`
-* **Description**
- While validating the consistency of the config - a required namespace was missing.
-* **Format String**
- `"The namespace ~s could not be found in the loadPath."`
-
-
-
-
-
-
-MISSING_NS2
-
-MISSING_NS2
-
-* **Severity**
- `CRIT`
-* **Description**
- While validating the consistency of the config - a required namespace was missing.
-* **Format String**
- `"The namespace ~s (referenced by ~s) could not be found in the loadPath."`
-
-
-
-
-
-
-MMAP_SCHEMA_FAIL
-
-MMAP_SCHEMA_FAIL
-
-* **Severity**
- `ERR`
-* **Description**
- Failed to setup the shared memory schema
-* **Format String**
- `"Failed to setup the shared memory schema"`
-
-
-
-
-
-
-NETCONF
-
-NETCONF
-
-* **Severity**
- `INFO`
-* **Description**
- NETCONF traffic log message
-* **Format String**
- `"~s"`
-
-
-
-
-
-
-NETCONF_HDR_ERR
-
-NETCONF_HDR_ERR
-
-* **Severity**
- `ERR`
-* **Description**
- The cleartext header indicating user and groups was badly formatted.
-* **Format String**
- `"Got bad NETCONF TCP header"`
-
-
-
-
-
-
-NIF_LOG
-
-NIF_LOG
-
-* **Severity**
- `INFO`
-* **Description**
- Log message from NIF code.
-* **Format String**
- `"~s: ~s"`
-
-
-
-
-
-
-NOAAA_CLI_LOGIN
-
-NOAAA_CLI_LOGIN
-
-* **Severity**
- `INFO`
-* **Description**
- A user used the --noaaa flag to confd_cli
-* **Format String**
- `"logged in from the CLI with aaa disabled"`
-
-
-
-
-
-
-NOTIFICATION_REPLAY_STORE_FAILURE
-
-NOTIFICATION_REPLAY_STORE_FAILURE
-
-* **Severity**
- `CRIT`
-* **Description**
- A failure occurred in the builtin notification replay store
-* **Format String**
- `"~s"`
-
-
-
-
-
-
-NO_CALLPOINT
-
-NO_CALLPOINT
-
-* **Severity**
- `CRIT`
-* **Description**
- ConfD tried to populate an XML tree but no code had registered under the relevant callpoint.
-* **Format String**
- `"no registration found for callpoint ~s of type=~s"`
-
-
-
-
-
-
-NO_SUCH_IDENTITY
-
-NO_SUCH_IDENTITY
-
-* **Severity**
- `CRIT`
-* **Description**
- The fxs file with the base identity is not loaded
-* **Format String**
- `"The identity ~s in namespace ~s refers to a non-existing base identity ~s in namespace ~s"`
-
-
-
-
-
-
-NO_SUCH_NS
-
-NO_SUCH_NS
-
-* **Severity**
- `CRIT`
-* **Description**
- A nonexistent namespace was referred to. Typically this means that a .fxs was missing from the loadPath.
-* **Format String**
- `"No such namespace ~s, used by ~s"`
-
-
-
-
-
-
-NO_SUCH_TYPE
-
-NO_SUCH_TYPE
-
-* **Severity**
- `CRIT`
-* **Description**
- A nonexistent type was referred to from a ns. Typically this means that a bad version of an .fxs file was found in the loadPath.
-* **Format String**
- `"No such simpleType '~s' in ~s, used by ~s"`
-
-
-
-
-
-
-NS_LOAD_ERR
-
-NS_LOAD_ERR
-
-* **Severity**
- `CRIT`
-* **Description**
- System tried to process a loaded namespace and failed.
-* **Format String**
- `"Failed to process namespace ~s: ~s"`
-
-
-
-
-
-
-NS_LOAD_ERR2
-
-NS_LOAD_ERR2
-
-* **Severity**
- `CRIT`
-* **Description**
- System tried to process a loaded namespace and failed.
-* **Format String**
- `"Failed to process namespaces: ~s"`
-
-
-
-
-
-
-OPEN_LOGFILE
-
-OPEN_LOGFILE
-
-* **Severity**
- `INFO`
-* **Description**
-  Indicate the target file for a certain type of logging
-* **Format String**
- `"Logging subsystem, opening log file '~s' for ~s"`
-
-
-
-
-
-
-PAM_AUTH_FAIL
-
-PAM_AUTH_FAIL
-
-* **Severity**
- `INFO`
-* **Description**
- A user failed to authenticate through PAM.
-* **Format String**
- `"PAM authentication failed via ~s from ~s with ~s: phase ~s, ~s"`
-
-
-
-
-
-
-PAM_AUTH_SUCCESS
-
-PAM_AUTH_SUCCESS
-
-* **Severity**
- `INFO`
-* **Description**
- A PAM authenticated user logged in.
-* **Format String**
- `"pam authentication succeeded via ~s from ~s with ~s"`
-
-
-
-
-
-
-PHASE0_STARTED
-
-PHASE0_STARTED
-
-* **Severity**
- `INFO`
-* **Description**
- ConfD has just started its start phase 0.
-* **Format String**
- `"ConfD phase0 started"`
-
-
-
-
-
-
-PHASE1_STARTED
-
-PHASE1_STARTED
-
-* **Severity**
- `INFO`
-* **Description**
- ConfD has just started its start phase 1.
-* **Format String**
- `"ConfD phase1 started"`
-
-
-
-
-
-
-READ_STATE_FILE_FAILED
-
-READ_STATE_FILE_FAILED
-
-* **Severity**
- `CRIT`
-* **Description**
- Reading of a state file failed
-* **Format String**
- `"Reading state file failed: ~s: ~s (~s)"`
-
-
-
-
-
-
-RELOAD
-
-RELOAD
-
-* **Severity**
- `INFO`
-* **Description**
- Reload of daemon configuration has been initiated.
-* **Format String**
- `"Reloading daemon configuration."`
-
-
-
-
-
-
-REOPEN_LOGS
-
-REOPEN_LOGS
-
-* **Severity**
- `INFO`
-* **Description**
- Logging subsystem, reopening log files
-* **Format String**
- `"Logging subsystem, reopening log files"`
-
-
-
-
-
-
-RESTCONF_REQUEST
-
-RESTCONF_REQUEST
-
-* **Severity**
- `INFO`
-* **Description**
- RESTCONF request
-* **Format String**
- `"RESTCONF: request with ~s: ~s"`
-
-
-
-
-
-
-RESTCONF_RESPONSE
-
-RESTCONF_RESPONSE
-
-* **Severity**
- `INFO`
-* **Description**
- RESTCONF response
-* **Format String**
- `"RESTCONF: response with ~s: ~s duration ~s us"`
-
-
-
-
-
-
-REST_AUTH_FAIL
-
-REST_AUTH_FAIL
-
-* **Severity**
- `INFO`
-* **Description**
- Rest authentication for a user failed.
-* **Format String**
- `"rest authentication failed from ~s"`
-
-
-
-
-
-
-REST_AUTH_SUCCESS
-
-REST_AUTH_SUCCESS
-
-* **Severity**
- `INFO`
-* **Description**
- A rest authenticated user logged in.
-* **Format String**
- `"rest authentication succeeded from ~s , member of groups: ~s"`
-
-
-
-
-
-
-REST_REQUEST
-
-REST_REQUEST
-
-* **Severity**
- `INFO`
-* **Description**
- REST request
-* **Format String**
- `"REST: request with ~s: ~s"`
-
-
-
-
-
-
-REST_RESPONSE
-
-REST_RESPONSE
-
-* **Severity**
- `INFO`
-* **Description**
- REST response
-* **Format String**
- `"REST: response with ~s: ~s duration ~s ms"`
-
-
-
-
-
-
-ROLLBACK_FAIL_CREATE
-
-ROLLBACK_FAIL_CREATE
-
-* **Severity**
- `ERR`
-* **Description**
- Error while creating rollback file.
-* **Format String**
- `"Error while creating rollback file: ~s: ~s"`
-
-
-
-
-
-
-ROLLBACK_FAIL_DELETE
-
-ROLLBACK_FAIL_DELETE
-
-* **Severity**
- `ERR`
-* **Description**
- Failed to delete rollback file.
-* **Format String**
- `"Failed to delete rollback file ~s: ~s"`
-
-
-
-
-
-
-ROLLBACK_FAIL_RENAME
-
-ROLLBACK_FAIL_RENAME
-
-* **Severity**
- `ERR`
-* **Description**
- Failed to rename rollback file.
-* **Format String**
- `"Failed to rename rollback file ~s to ~s: ~s"`
-
-
-
-
-
-
-ROLLBACK_FAIL_REPAIR
-
-ROLLBACK_FAIL_REPAIR
-
-* **Severity**
- `ERR`
-* **Description**
- Failed to repair rollback files.
-* **Format String**
- `"Failed to repair rollback files."`
-
-
-
-
-
-
-ROLLBACK_REMOVE
-
-ROLLBACK_REMOVE
-
-* **Severity**
- `INFO`
-* **Description**
- Found half created rollback0 file - removing and creating new.
-* **Format String**
- `"Found half created rollback0 file - removing and creating new"`
-
-
-
-
-
-
-ROLLBACK_REPAIR
-
-ROLLBACK_REPAIR
-
-* **Severity**
- `INFO`
-* **Description**
- Found half created rollback0 file - repairing.
-* **Format String**
- `"Found half created rollback0 file - repairing"`
-
-
-
-
-
-
-SESSION_CREATE
-
-SESSION_CREATE
-
-* **Severity**
- `INFO`
-* **Description**
- A new user session was created
-* **Format String**
- `"created new session via ~s from ~s with ~s"`
-
-
-
-
-
-
-SESSION_LIMIT
-
-SESSION_LIMIT
-
-* **Severity**
- `INFO`
-* **Description**
- Session limit reached, rejected new session request.
-* **Format String**
- `"Session limit of type '~s' reached, rejected new session request"`
-
-
-
-
-
-
-SESSION_MAX_EXCEEDED
-
-SESSION_MAX_EXCEEDED
-
-* **Severity**
- `INFO`
-* **Description**
-  A user failed to create a new user session due to exceeding session limits
-* **Format String**
- `"could not create new session via ~s from ~s with ~s due to session limits"`
-
-
-
-
-
-
-SESSION_TERMINATION
-
-SESSION_TERMINATION
-
-* **Severity**
- `INFO`
-* **Description**
- A user session was terminated due to specified reason
-* **Format String**
- `"terminated session (reason: ~s)"`
-
-
-
-
-
-
-SKIP_FILE_LOADING
-
-SKIP_FILE_LOADING
-
-* **Severity**
- `DEBUG`
-* **Description**
- System skips a file.
-* **Format String**
- `"Skipping file ~s: ~s"`
-
-
-
-
-
-
-SNMP_AUTHENTICATION_FAILED
-
-SNMP_AUTHENTICATION_FAILED
-
-* **Severity**
- `INFO`
-* **Description**
- An SNMP authentication failed.
-* **Format String**
- `"SNMP authentication failed: ~s"`
-
-
-
-
-
-
-SNMP_CANT_LOAD_MIB
-
-SNMP_CANT_LOAD_MIB
-
-* **Severity**
- `CRIT`
-* **Description**
- The SNMP Agent failed to load a MIB file
-* **Format String**
- `"Can't load MIB file: ~s"`
-
-
-
-
-
-
-SNMP_MIB_LOADING
-
-SNMP_MIB_LOADING
-
-* **Severity**
- `DEBUG`
-* **Description**
- SNMP Agent loading a MIB file
-* **Format String**
- `"Loading MIB: ~s"`
-
-
-
-
-
-
-SNMP_NOT_A_TRAP
-
-SNMP_NOT_A_TRAP
-
-* **Severity**
- `INFO`
-* **Description**
-  A UDP packet was received on the trap receiving port, but it was not an SNMP trap.
-* **Format String**
- `"SNMP gateway: Non-trap received from ~s"`
-
-
-
-
-
-
-SNMP_READ_STATE_FILE_FAILED
-
-SNMP_READ_STATE_FILE_FAILED
-
-* **Severity**
- `CRIT`
-* **Description**
- Read SNMP agent state file failed
-* **Format String**
- `"Read state file failed: ~s: ~s"`
-
-
-
-
-
-
-SNMP_REQUIRES_CDB
-
-SNMP_REQUIRES_CDB
-
-* **Severity**
- `WARNING`
-* **Description**
- The SNMP agent requires CDB to be enabled in order to be started.
-* **Format String**
- `"Can't start SNMP. CDB is not enabled"`
-
-
-
-
-
-
-SNMP_TRAP_NOT_FORWARDED
-
-SNMP_TRAP_NOT_FORWARDED
-
-* **Severity**
- `INFO`
-* **Description**
- An SNMP trap was to be forwarded, but couldn't be.
-* **Format String**
- `"SNMP gateway: Can't forward trap from ~s; ~s"`
-
-
-
-
-
-
-SNMP_TRAP_NOT_RECOGNIZED
-
-SNMP_TRAP_NOT_RECOGNIZED
-
-* **Severity**
- `INFO`
-* **Description**
- An SNMP trap was received on the trap receiving port, but its definition is not known
-* **Format String**
- `"SNMP gateway: Can't forward trap with OID ~s from ~s; There is no notification with this OID in the loaded models."`
-
-
-
-
-
-
-SNMP_TRAP_OPEN_PORT
-
-SNMP_TRAP_OPEN_PORT
-
-* **Severity**
- `ERR`
-* **Description**
- The port for listening to SNMP traps could not be opened.
-* **Format String**
- `"SNMP gateway: Can't open trap listening port ~s: ~s"`
-
-
-
-
-
-
-SNMP_TRAP_UNKNOWN_SENDER
-
-SNMP_TRAP_UNKNOWN_SENDER
-
-* **Severity**
- `INFO`
-* **Description**
- An SNMP trap was to be forwarded, but the sender was not listed in confd.conf.
-* **Format String**
- `"SNMP gateway: Not forwarding trap from ~s; the sender is not recognized"`
-
-
-
-
-
-
-SNMP_TRAP_V1
-
-SNMP_TRAP_V1
-
-* **Severity**
- `INFO`
-* **Description**
- An SNMP v1 trap was received on the trap receiving port, but forwarding v1 traps is not supported.
-* **Format String**
- `"SNMP gateway: V1 trap received from ~s"`
-
-
-
-
-
-
-SNMP_WRITE_STATE_FILE_FAILED
-
-SNMP_WRITE_STATE_FILE_FAILED
-
-* **Severity**
- `WARNING`
-* **Description**
- Write SNMP agent state file failed
-* **Format String**
- `"Write state file failed: ~s: ~s"`
-
-
-
-
-
-
-SSH_HOST_KEY_UNAVAILABLE
-
-SSH_HOST_KEY_UNAVAILABLE
-
-* **Severity**
- `ERR`
-* **Description**
- No SSH host keys available.
-* **Format String**
- `"No SSH host keys available"`
-
-
-
-
-
-
-SSH_SUBSYS_ERR
-
-SSH_SUBSYS_ERR
-
-* **Severity**
- `INFO`
-* **Description**
-  Typically errors where the client doesn't properly send the "subsystem" command.
-* **Format String**
- `"ssh protocol subsys - ~s"`
-
-
-
-
-
-
-STARTED
-
-STARTED
-
-* **Severity**
- `INFO`
-* **Description**
- ConfD has started.
-* **Format String**
- `"ConfD started vsn: ~s"`
-
-
-
-
-
-
-STARTING
-
-STARTING
-
-* **Severity**
- `INFO`
-* **Description**
- ConfD is starting.
-* **Format String**
- `"Starting ConfD vsn: ~s"`
-
-
-
-
-
-
-STOPPING
-
-STOPPING
-
-* **Severity**
- `INFO`
-* **Description**
- ConfD is stopping (due to e.g. confd --stop).
-* **Format String**
- `"ConfD stopping (~s)"`
-
-
-
-
-
-
-TOKEN_MISMATCH
-
-TOKEN_MISMATCH
-
-* **Severity**
- `ERR`
-* **Description**
- A secondary connected to a primary with a bad auth token
-* **Format String**
- `"Token mismatch, secondary is not allowed"`
-
-
-
-
-
-
-UPGRADE_ABORTED
-
-UPGRADE_ABORTED
-
-* **Severity**
- `INFO`
-* **Description**
- In-service upgrade was aborted.
-* **Format String**
- `"Upgrade aborted"`
-
-
-
-
-
-
-UPGRADE_COMMITTED
-
-UPGRADE_COMMITTED
-
-* **Severity**
- `INFO`
-* **Description**
- In-service upgrade was committed.
-* **Format String**
- `"Upgrade committed"`
-
-
-
-
-
-
-UPGRADE_INIT_STARTED
-
-UPGRADE_INIT_STARTED
-
-* **Severity**
- `INFO`
-* **Description**
- In-service upgrade initialization has started.
-* **Format String**
- `"Upgrade init started"`
-
-
-
-
-
-
-UPGRADE_INIT_SUCCEEDED
-
-UPGRADE_INIT_SUCCEEDED
-
-* **Severity**
- `INFO`
-* **Description**
- In-service upgrade initialization succeeded.
-* **Format String**
- `"Upgrade init succeeded"`
-
-
-
-
-
-
-UPGRADE_PERFORMED
-
-UPGRADE_PERFORMED
-
-* **Severity**
- `INFO`
-* **Description**
- In-service upgrade has been performed (not committed yet).
-* **Format String**
- `"Upgrade performed"`
-
-
-
-
-
-
-WEBUI_LOG_MSG
-
-WEBUI_LOG_MSG
-
-* **Severity**
- `INFO`
-* **Description**
- WebUI access log message
-* **Format String**
- `"WebUI access log: ~s"`
-
-
-
-
-
-
-WEB_ACTION
-
-WEB_ACTION
-
-* **Severity**
- `INFO`
-* **Description**
- User executed a Web UI action.
-* **Format String**
- `"WebUI action '~s'"`
-
-
-
-
-
-
-WEB_CMD
-
-WEB_CMD
-
-* **Severity**
- `INFO`
-* **Description**
- User executed a Web UI command.
-* **Format String**
- `"WebUI cmd '~s'"`
-
-
-
-
-
-
-WEB_COMMIT
-
-WEB_COMMIT
-
-* **Severity**
- `INFO`
-* **Description**
- User performed Web UI commit.
-* **Format String**
- `"WebUI commit ~s"`
-
-
-
-
-
-
-WRITE_STATE_FILE_FAILED
-
-WRITE_STATE_FILE_FAILED
-
-* **Severity**
- `CRIT`
-* **Description**
- Writing of a state file failed
-* **Format String**
- `"Writing state file failed: ~s: ~s (~s)"`
-
-
-
-
-
-
-XPATH_EVAL_ERROR1
-
-XPATH_EVAL_ERROR1
-
-* **Severity**
- `WARNING`
-* **Description**
- An error occurred while evaluating an XPath expression.
-* **Format String**
- `"XPath evaluation error: ~s for ~s"`
-
-
-
-
-
-
-XPATH_EVAL_ERROR2
-
-XPATH_EVAL_ERROR2
-
-* **Severity**
- `WARNING`
-* **Description**
- An error occurred while evaluating an XPath expression.
-* **Format String**
- `"XPath evaluation error: '~s' resulted in ~s for ~s"`
-
-
-
-
-
-
-COMMIT_UN_SYNCED_DEV
-
-COMMIT_UN_SYNCED_DEV
-
-* **Severity**
- `INFO`
-* **Description**
- Data was committed toward a device with bad or unknown sync state
-* **Format String**
- `"Committed data towards device ~s which is out of sync"`
-
-
-
-
-
-
-NCS_DEVICE_OUT_OF_SYNC
-
-NCS_DEVICE_OUT_OF_SYNC
-
-* **Severity**
- `INFO`
-* **Description**
- A check-sync action reported out-of-sync for a device
-* **Format String**
- `"NCS device-out-of-sync Device '~s' Info '~s'"`
-
-
-
-
-
-
-NCS_JAVA_VM_FAIL
-
-NCS_JAVA_VM_FAIL
-
-* **Severity**
- `ERR`
-* **Description**
- The NCS Java VM failure/timeout
-* **Format String**
- `"The NCS Java VM ~s"`
-
-
-
-
-
-
-NCS_JAVA_VM_START
-
-NCS_JAVA_VM_START
-
-* **Severity**
- `INFO`
-* **Description**
- Starting the NCS Java VM
-* **Format String**
- `"Starting the NCS Java VM"`
-
-
-
-
-
-
-NCS_PACKAGE_AUTH_BAD_RET
-
-NCS_PACKAGE_AUTH_BAD_RET
-
-* **Severity**
- `ERR`
-* **Description**
- Package authentication program returned badly formatted data.
-* **Format String**
- `"package authentication using ~s program ret bad output: ~s"`
-
-
-
-
-
-
-NCS_PACKAGE_AUTH_FAIL
-
-NCS_PACKAGE_AUTH_FAIL
-
-* **Severity**
- `INFO`
-* **Description**
- Package authentication failed.
-* **Format String**
- `"package authentication using ~s failed via ~s from ~s with ~s: ~s"`
-
-
-
-
-
-
-NCS_PACKAGE_AUTH_SUCCESS
-
-NCS_PACKAGE_AUTH_SUCCESS
-
-* **Severity**
- `INFO`
-* **Description**
- A package authenticated user logged in.
-* **Format String**
- `"package authentication using ~s succeeded via ~s from ~s with ~s, member of groups: ~s~s"`
-
-
-
-
-
-
-NCS_PACKAGE_BAD_DEPENDENCY
-
-NCS_PACKAGE_BAD_DEPENDENCY
-
-* **Severity**
- `CRIT`
-* **Description**
- Bad NCS package dependency
-* **Format String**
- `"Failed to load NCS package: ~s; required package ~s of version ~s is not present (found ~s)"`
-
-
-
-
-
-
-NCS_PACKAGE_BAD_NCS_VERSION
-
-NCS_PACKAGE_BAD_NCS_VERSION
-
-* **Severity**
- `CRIT`
-* **Description**
- Bad NCS version for package
-* **Format String**
- `"Failed to load NCS package: ~s; requires NCS version ~s"`
-
-
-
-
-
-
-NCS_PACKAGE_CHAL_2FA
-
-NCS_PACKAGE_CHAL_2FA
-
-* **Severity**
- `INFO`
-* **Description**
- Package authentication challenge sent to a user.
-* **Format String**
- `"package authentication challenge sent to ~s from ~s with ~s"`
-
-
-
-
-
-
-NCS_PACKAGE_CHAL_FAIL
-
-NCS_PACKAGE_CHAL_FAIL
-
-* **Severity**
- `INFO`
-* **Description**
- Package authentication challenge failed.
-* **Format String**
- `"package authentication challenge using ~s failed via ~s from ~s with ~s: ~s"`
-
-
-
-
-
-
-NCS_PACKAGE_CIRCULAR_DEPENDENCY
-
-NCS_PACKAGE_CIRCULAR_DEPENDENCY
-
-* **Severity**
- `CRIT`
-* **Description**
- Circular NCS package dependency
-* **Format String**
- `"Failed to load NCS package: ~s; circular dependency found"`
-
-
-
-
-
-
-NCS_PACKAGE_COPYING
-
-NCS_PACKAGE_COPYING
-
-* **Severity**
- `DEBUG`
-* **Description**
- A package is copied from the load path to private directory
-* **Format String**
- `"Copying NCS package from ~s to ~s"`
-
-
-
-
-
-
-NCS_PACKAGE_DUPLICATE
-
-NCS_PACKAGE_DUPLICATE
-
-* **Severity**
- `CRIT`
-* **Description**
- Duplicate package found
-* **Format String**
- `"Failed to load duplicate NCS package ~s: (~s)"`
-
-
-
-
-
-
-NCS_PACKAGE_STATUS_CHANGE
-
-NCS_PACKAGE_STATUS_CHANGE
-
-* **Severity**
- `DEBUG`
-* **Description**
- Status changed for the given package.
-* **Format String**
- `"package '~s' status changed to '~s'."`
-
-
-
-
-
-
-NCS_PACKAGE_SYNTAX_ERROR
-
-NCS_PACKAGE_SYNTAX_ERROR
-
-* **Severity**
- `CRIT`
-* **Description**
- Syntax error in package file
-* **Format String**
- `"Failed to load NCS package: ~s; syntax error in package file"`
-
-
-
-
-
-
-NCS_PACKAGE_UPGRADE_ABORTED
-
-NCS_PACKAGE_UPGRADE_ABORTED
-
-* **Severity**
- `CRIT`
-* **Description**
-  The CDB upgrade was aborted, implying that CDB is untouched. However, the package state is changed.
-* **Format String**
- `"NCS package upgrade failed with reason '~s'"`
-
-
-
-
-
-
-NCS_PACKAGE_UPGRADE_UNSAFE
-
-NCS_PACKAGE_UPGRADE_UNSAFE
-
-* **Severity**
- `CRIT`
-* **Description**
- Package upgrade has been aborted due to warnings.
-* **Format String**
- `"NCS package upgrade has been aborted due to warnings:\n~s"`
-
-
-
-
-
-
-NCS_PYTHON_VM_FAIL
-
-NCS_PYTHON_VM_FAIL
-
-* **Severity**
- `ERR`
-* **Description**
- The NCS Python VM failure/timeout
-* **Format String**
- `"The NCS Python VM ~s"`
-
-
-
-
-
-
-NCS_PYTHON_VM_START
-
-NCS_PYTHON_VM_START
-
-* **Severity**
- `INFO`
-* **Description**
- Starting the named NCS Python VM
-* **Format String**
- `"Starting the NCS Python VM ~s"`
-
-
-
-
-
-
-NCS_PYTHON_VM_START_UPGRADE
-
-NCS_PYTHON_VM_START_UPGRADE
-
-* **Severity**
- `INFO`
-* **Description**
- Starting a Python VM to run upgrade code
-* **Format String**
- `"Starting upgrade of NCS Python package ~s"`
-
-
-
-
-
-
-NCS_SERVICE_OUT_OF_SYNC
-
-NCS_SERVICE_OUT_OF_SYNC
-
-* **Severity**
- `INFO`
-* **Description**
- A check-sync action reported out-of-sync for a service
-* **Format String**
- `"NCS service-out-of-sync Service '~s' Info '~s'"`
-
-
-
-
-
-
-NCS_SET_PLATFORM_DATA_ERROR
-
-NCS_SET_PLATFORM_DATA_ERROR
-
-* **Severity**
- `ERR`
-* **Description**
- The device failed to set the platform operational data at connect
-* **Format String**
- `"NCS Device '~s' failed to set platform data Info '~s'"`
-
-
-
-
-
-
-NCS_SMART_LICENSING_ENTITLEMENT_NOTIFICATION
-
-NCS_SMART_LICENSING_ENTITLEMENT_NOTIFICATION
-
-* **Severity**
- `INFO`
-* **Description**
- Smart Licensing Entitlement Notification
-* **Format String**
- `"Smart Licensing Entitlement Notification: ~s"`
-
-
-
-
-
-
-NCS_SMART_LICENSING_EVALUATION_COUNTDOWN
-
-NCS_SMART_LICENSING_EVALUATION_COUNTDOWN
-
-* **Severity**
- `INFO`
-* **Description**
- Smart Licensing evaluation time remaining
-* **Format String**
- `"Smart Licensing evaluation time remaining: ~s"`
-
-
-
-
-
-
-NCS_SMART_LICENSING_FAIL
-
-NCS_SMART_LICENSING_FAIL
-
-* **Severity**
- `INFO`
-* **Description**
- The NCS Smart Licensing Java VM failure/timeout
-* **Format String**
- `"The NCS Smart Licensing Java VM ~s"`
-
-
-
-
-
-
-NCS_SMART_LICENSING_GLOBAL_NOTIFICATION
-
-NCS_SMART_LICENSING_GLOBAL_NOTIFICATION
-
-* **Severity**
- `INFO`
-* **Description**
- Smart Licensing Global Notification
-* **Format String**
- `"Smart Licensing Global Notification: ~s"`
-
-
-
-
-
-
-NCS_SMART_LICENSING_START
-
-NCS_SMART_LICENSING_START
-
-* **Severity**
- `INFO`
-* **Description**
- Starting the NCS Smart Licensing Java VM
-* **Format String**
- `"Starting the NCS Smart Licensing Java VM"`
-
-
-
-
-
-
-NCS_SNMPM_START
-
-NCS_SNMPM_START
-
-* **Severity**
- `INFO`
-* **Description**
- Starting the NCS SNMP manager component
-* **Format String**
- `"Starting the NCS SNMP manager component"`
-
-
-
-
-
-
-NCS_SNMPM_STOP
-
-NCS_SNMPM_STOP
-
-* **Severity**
- `INFO`
-* **Description**
- The NCS SNMP manager component has been stopped
-* **Format String**
- `"The NCS SNMP manager component has been stopped"`
-
-
-
-
-
-
-NCS_SNMP_INIT_ERR
-
-NCS_SNMP_INIT_ERR
-
-* **Severity**
- `INFO`
-* **Description**
- Failed to locate snmp_init.xml in loadpath
-* **Format String**
- `"Failed to locate snmp_init.xml in loadpath ~s"`
-
-
-
-
-
-
-NCS_UPGRADE_ABORTED_INTERNAL
-
-NCS_UPGRADE_ABORTED_INTERNAL
-
-* **Severity**
- `CRIT`
-* **Description**
- The CDB upgrade was aborted due to some internal error. CDB is left untouched
-* **Format String**
- `"NCS upgrade failed with reason '~s'"`
-
-
-
-
-
-
-BAD_LOCAL_PASS
-
-BAD_LOCAL_PASS
-
-* **Severity**
- `INFO`
-* **Description**
- A locally configured user provided a bad password.
-* **Format String**
- `"Provided bad password"`
-
-
-
-
-
-
-EXT_LOGIN
-
-EXT_LOGIN
-
-* **Severity**
- `INFO`
-* **Description**
- An externally authenticated user logged in.
-* **Format String**
- `"Logged in over ~s using externalauth, member of groups: ~s~s"`
-
-
-
-
-
-
-EXT_NO_LOGIN
-
-EXT_NO_LOGIN
-
-* **Severity**
- `INFO`
-* **Description**
- External authentication failed for a user.
-* **Format String**
- `"failed to login using externalauth: ~s"`
-
-
-
-
-
-
-NO_SUCH_LOCAL_USER
-
-NO_SUCH_LOCAL_USER
-
-* **Severity**
- `INFO`
-* **Description**
-  A non-existing local user tried to log in.
-* **Format String**
- `"no such local user"`
-
-
-
-
-
-
-PAM_LOGIN_FAILED
-
-PAM_LOGIN_FAILED
-
-* **Severity**
- `INFO`
-* **Description**
-  A user failed to log in through PAM.
-* **Format String**
- `"pam phase ~s failed to login through PAM: ~s"`
-
-
-
-
-
-
-PAM_NO_LOGIN
-
-PAM_NO_LOGIN
-
-* **Severity**
- `INFO`
-* **Description**
-  A user failed to log in through PAM
-* **Format String**
- `"failed to login through PAM: ~s"`
-
-
-
-
-
-
-SSH_LOGIN
-
-SSH_LOGIN
-
-* **Severity**
- `INFO`
-* **Description**
- A user logged into ConfD's builtin ssh server.
-* **Format String**
- `"logged in over ssh from ~s with authmeth:~s"`
-
-
-
-
-
-
-SSH_LOGOUT
-
-SSH_LOGOUT
-
-* **Severity**
- `INFO`
-* **Description**
- A user was logged out from ConfD's builtin ssh server.
-* **Format String**
- `"Logged out ssh <~s> user"`
-
-
-
-
-
-
-SSH_NO_LOGIN
-
-SSH_NO_LOGIN
-
-* **Severity**
- `INFO`
-* **Description**
-  A user failed to log in to ConfD's builtin SSH server.
-* **Format String**
- `"Failed to login over ssh: ~s"`
-
-
-
-
-
-
-WEB_LOGIN
-
-WEB_LOGIN
-
-* **Severity**
- `INFO`
-* **Description**
- A user logged in through the WebUI.
-* **Format String**
- `"logged in through Web UI from ~s"`
-
-
-
-
-
-
-WEB_LOGOUT
-
-WEB_LOGOUT
-
-* **Severity**
- `INFO`
-* **Description**
- A Web UI user logged out.
-* **Format String**
- `"logged out from Web UI"`
-
-
-
diff --git a/best-practices/network-automation-delivery-model.md b/best-practices/network-automation-delivery-model.md
new file mode 100644
index 00000000..67afc7b1
--- /dev/null
+++ b/best-practices/network-automation-delivery-model.md
@@ -0,0 +1,10 @@
+---
+description: Learn how to build an automation practice.
+icon: space-awesome
+---
+
+# Network Automation Delivery Model
+
+Visit the link below to learn more.
+
+{% embed url="https://developer.cisco.com/docs/network-automation-delivery-model/network-automation-delivery-model/" %}
diff --git a/best-practices/nso-on-kubernetes.md b/best-practices/nso-on-kubernetes.md
new file mode 100644
index 00000000..fa10e65e
--- /dev/null
+++ b/best-practices/nso-on-kubernetes.md
@@ -0,0 +1,120 @@
+---
+icon: spider-web
+description: Best practice guidelines for deploying NSO on Kubernetes.
+---
+
+# NSO on Kubernetes
+
+Deploying Cisco NSO on Kubernetes offers numerous advantages, including consistent deployments, self-healing capabilities, and better version control. This document outlines best practices for deploying NSO on Kubernetes to ensure optimal performance, security, and maintainability.
+
+{% hint style="success" %}
+See also the documentation for the Cisco-provided [Containerized NSO](https://cisco-tailf.gitbook.io/nso-docs/guides/administration/installation-and-deployment/containerized-nso) images.
+{% endhint %}
+
+## Prerequisites
+
+### Kubernetes Cluster
+
+* **Version Compatibility**: Ensure that your Kubernetes cluster is within the three most recent minor releases to maintain official support.
+* **Persistent Storage**: Install a Container Storage Interface (CSI) driver if you are not using a managed Kubernetes service. Managed services like EKS on AWS or GKE on GCP handle this automatically.
+* **Networking**: Install a Container Network Interface (CNI) plugin such as Cilium, Calico, Flannel, or Weave. Additionally, configure an ingress controller or load balancer as needed to expose services.
+* **TLS Certificates**: Use TLS certificates for HTTPS access and to secure communication between different NSO instances. This is crucial for securing data transmission.
+
+## Deployment Architecture
+
+### Namespace Design
+
+* **Isolation**: Run NSO in its own namespace to isolate its resources (pods, services, secrets, and so on) from other applications and services in the cluster. This logical separation helps manage resources and apply specific RBAC policies.
+
+### Pod Design
+
+* **Stateful Pods**: Use StatefulSets for production deployments to ensure that each NSO pod retains its data across restarts by mounting the same PersistentVolume. StatefulSets also provide a stable network identity for each pod.
+* **Data Persistence**: Attach persistent volumes to NSO pods to ensure data persistence. Avoid using hostPath volumes in production due to security risks (see the sketch after this list).
+
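+Below is a minimal StatefulSet sketch, assuming a dedicated `nso` namespace, a default StorageClass, and that the image reference and mount paths (`/nso/run`, `/log`) match your containerized NSO setup; all names and sizes are illustrative placeholders, not a validated production manifest.
+
+```yaml
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: nso
+  namespace: nso                # dedicated namespace, as recommended above
+spec:
+  serviceName: nso              # headless Service giving each pod a stable DNS name
+  replicas: 1
+  selector:
+    matchLabels:
+      app: nso
+  template:
+    metadata:
+      labels:
+        app: nso
+    spec:
+      containers:
+        - name: nso
+          image: registry.example.com/cisco-nso-prod:6.4   # placeholder image reference
+          volumeMounts:
+            - name: nso-run
+              mountPath: /nso/run     # assumed NSO running directory
+            - name: nso-log
+              mountPath: /log         # assumed NSO logs directory
+  volumeClaimTemplates:               # one PersistentVolume per pod, reattached on restart
+    - metadata:
+        name: nso-run
+      spec:
+        accessModes: ["ReadWriteOnce"]
+        resources:
+          requests:
+            storage: 10Gi
+    - metadata:
+        name: nso-log
+      spec:
+        accessModes: ["ReadWriteOnce"]
+        resources:
+          requests:
+            storage: 5Gi
+```
+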
+### Service Design
+
+* **Service Types**:
+ * **ClusterIP**: Use for internal communications between NSO instances or other Kubernetes resources.
+ * **NodePort**: Use for testing purposes only, as it exposes pods over the address of a Kubernetes node.
+  * **LoadBalancer**: Use for external access, such as exposing SSH/NETCONF ports (an example follows this list).
+* **Ingress Controllers**: Use Ingress for managing external access to HTTP or HTTPS traffic. For more advanced routing capabilities, consider using the Gateway API.
+
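+As a sketch, a LoadBalancer Service exposing NETCONF over SSH might look as follows; port 2022 is an assumption based on a default `ncs.conf` and should match whatever northbound port your instance actually listens on.
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: nso-netconf
+  namespace: nso
+spec:
+  type: LoadBalancer          # externally reachable through the cloud load balancer
+  selector:
+    app: nso                  # matches the StatefulSet pod labels above
+  ports:
+    - name: netconf-ssh
+      port: 2022
+      targetPort: 2022
+```
+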
+## Storage Design
+
+### Volume Management
+
+* **Persistent Volumes**: Use PersistentVolumeClaims to manage storage and ensure that critical directories, like the NSO running, packages, and logs directories, persist through restarts.
+* **NSO Directories**: Mount the necessary directories, such as the NSO running directory, packages directory, and logs directory, to persistent volumes (see the claim sketch after this list).
+* **Avoid HostPath**: Refrain from using hostPath volumes in production environments, as they expose NSO data to the host system and add maintenance overhead.
+
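+For volumes not created through `volumeClaimTemplates`, a standalone claim can be declared explicitly; the claim name, storage class, and size below are assumptions to adapt to your CSI driver.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: nso-packages          # hypothetical claim for the packages directory
+  namespace: nso
+spec:
+  accessModes: ["ReadWriteOnce"]
+  storageClassName: standard  # assumption: use the class provided by your CSI driver
+  resources:
+    requests:
+      storage: 5Gi
+```
+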
+## Deployment Strategies
+
+### YAML Manifests
+
+* **Version Control**: Define Kubernetes objects using YAML manifests and manage them via version control. This ensures consistent deployments and easier rollback capabilities.
+* **ConfigMaps and Secrets**: Use ConfigMaps for non-sensitive data, such as NSO configuration files, and Secrets for sensitive data such as Docker registry credentials, passwords, API keys, and, for NSO specifically, the encryption keys for the CDB (see the sketch after this list).
+
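+A sketch of a Secret carrying CDB encryption keys is shown below; the key file name and the way it is consumed are assumptions that must match how your NSO image expects to receive the keys, and real key material should of course never be committed to version control.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: nso-crypto-keys
+  namespace: nso
+type: Opaque
+stringData:
+  ncs.crypto_keys: "replace-with-generated-key-material"  # placeholder, not a real key
+```
+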
+### Helm Charts
+
+* **Simplified Deployment**: Use Helm charts for packaging YAML manifests, simplifying the deployment process. Manage deployment parameters through a `values.yaml` file (see the fragment after this list).
+* **Custom Configuration**: Expose runtime parameters, service ports, URLs, and other configurations via Helm templates. Helm charts allow for more dynamic and reusable configurations.
+
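+A `values.yaml` fragment for a hypothetical NSO chart could expose the parameters mentioned above; every key here is chart-specific and shown only to illustrate the pattern.
+
+```yaml
+# values.yaml (illustrative; keys depend entirely on your chart's templates)
+image:
+  repository: registry.example.com/cisco-nso-prod
+  tag: "6.4"
+service:
+  type: LoadBalancer
+  netconfPort: 2022
+persistence:
+  storageClass: standard
+  runSize: 10Gi
+  logSize: 5Gi
+```
+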
+## Security Considerations
+
+### Running as Non-Root
+
+* **SecurityContext**: Limit the Linux capabilities that are allowed for the NSO container and avoid running containers as the root user. This can be done by defining a SecurityContext in the Pod specification (see the fragment after this list).
+* **Custom Dockerfile**: Create a Dockerfile to add a non-root user and adjust folder permissions, ensuring NSO runs as a dedicated user. This can help in adhering to the principle of least privilege.
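+
+A minimal sketch of such a SecurityContext, assuming the image contains a dedicated NSO user with UID 1000:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nso
+  namespace: nso
+spec:
+  securityContext:
+    runAsNonRoot: true
+    runAsUser: 1000               # assumed UID of the dedicated NSO user in the image
+    fsGroup: 1000                 # grants the pod's group access to mounted volumes
+  containers:
+    - name: nso
+      image: registry.example.com/nso:6.0
+      securityContext:
+        allowPrivilegeEscalation: false
+        capabilities:
+          drop: ["ALL"]           # add back only the capabilities NSO actually needs
+```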
+
+### Network Policies
+
+* **Ingress and Egress Control**: Implement network policies to restrict access to NSO instances and managed devices. Limit the communication to trusted IP ranges and namespaces.
+* **Service Accounts**: Create dedicated service accounts for NSO pods to minimize permissions and reduce security risks. This ensures that each service account only has the permissions it needs for its tasks.
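+
+For illustration, a NetworkPolicy restricting ingress to a trusted operator range and egress to a managed-device network might look as follows (both CIDRs are assumptions):
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: nso-restrict
+  namespace: nso
+spec:
+  podSelector:
+    matchLabels:
+      app: nso
+  policyTypes: ["Ingress", "Egress"]
+  ingress:
+    - from:
+        - ipBlock:
+            cidr: 10.0.0.0/16     # assumed trusted operator range
+  egress:
+    - to:
+        - ipBlock:
+            cidr: 192.0.2.0/24    # assumed managed-device management network
+```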
+
+## Monitoring & Logging
+
+### Observability Exporter
+
+* **Setup**: Transform Docker Compose files into Kubernetes manifests using tools like Kompose. Deploy the observability exporter to export data in industry-standard formats such as OpenTelemetry.
+* **Container Probes**: Implement readiness probes so that Kubernetes only routes traffic to NSO containers that are healthy and ready; an HTTP check against the NSO API is a common choice (see the fragment below).
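+
+A readiness probe fragment for the NSO container spec could look like this; the path and port are assumptions and should point at whatever URL your NSO web server answers on:
+
+```yaml
+readinessProbe:
+  httpGet:
+    path: /restconf             # hypothetical health-check endpoint
+    port: 8080                  # assumed NSO web server port
+  initialDelaySeconds: 30
+  periodSeconds: 10
+  failureThreshold: 3
+```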
+
+## Scaling & Performance Optimization
+
+### Resource Requests & Limits
+
+* **Resource Management**: Define resource requests and limits for NSO pods to ensure appropriate CPU and memory allocation. This helps maintain cluster stability and performance by preventing any single pod from using excessive resources.
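+
+In the container spec, this can take the following shape (the numbers are illustrative and should be sized from your own measurements):
+
+```yaml
+resources:
+  requests:                     # capacity the scheduler reserves for the pod
+    cpu: "2"
+    memory: 4Gi
+  limits:                       # hard cap on what the container may consume
+    cpu: "4"
+    memory: 8Gi
+```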
+
+### Affinity & Anti-Affinity
+
+* **Pod Distribution**: Use affinity and anti-affinity rules to distribute NSO pods across worker nodes. Spreading the pods over distinct nodes improves availability and resilience, as in the fragment below.
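+
+The fragment below, placed in the pod template, requires NSO replicas to land on different worker nodes:
+
+```yaml
+affinity:
+  podAntiAffinity:
+    requiredDuringSchedulingIgnoredDuringExecution:
+      - labelSelector:
+          matchLabels:
+            app: nso            # keep pods with this label apart
+        topologyKey: kubernetes.io/hostname
+```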
+
+## High Availability & Resiliency
+
+### Raft HA
+
+* **Setup**: Configure a three-node Raft cluster for high availability. Give each node a unique pod and network identity, as well as its own PersistentVolume and PersistentVolumeClaim.
+* **Annotations**: Use annotations to direct requests to the current primary NSO instance, and implement a sidecar container that periodically checks and updates the Raft HA status so that traffic always reaches the primary.
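+
+One way to realize the "direct requests to the primary" idea is a Service whose selector matches a label that the sidecar sets on whichever pod currently holds the Raft leader role. Service selectors match labels rather than annotations, so this sketch uses a hypothetical label; it is not an NSO-provided facility:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: nso-primary
+  namespace: nso
+spec:
+  selector:
+    app: nso
+    raft-role: leader           # hypothetical label maintained by the sidecar
+  ports:
+    - name: netconf
+      port: 830
+```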
+
+## Backup & Disaster Recovery
+
+### NSO Backup
+
+* **Automated Backups**: Use Kubernetes CronJobs to automate regular NSO backups. Store the backups securely and periodically verify them.
+* **Disaster Recovery**: Ensure that NSO backups are stored in a secure location and can be restored in case of cluster failure. Use temporary container instances to restore backups without running NSO.
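+
+A nightly backup CronJob might be sketched as follows; the image, command, and claim name are assumptions (the claim name follows the StatefulSet naming convention `<template>-<statefulset>-<ordinal>`):
+
+```yaml
+apiVersion: batch/v1
+kind: CronJob
+metadata:
+  name: nso-backup
+  namespace: nso
+spec:
+  schedule: "0 2 * * *"                # nightly at 02:00
+  jobTemplate:
+    spec:
+      template:
+        spec:
+          restartPolicy: OnFailure
+          containers:
+            - name: backup
+              image: registry.example.com/nso:6.0
+              command: ["ncs-backup"]  # assumed backup entry point in the image
+              volumeMounts:
+                - name: nso-data
+                  mountPath: /nso
+          volumes:
+            - name: nso-data
+              persistentVolumeClaim:
+                claimName: nso-data-nso-0   # a ReadWriteOnce volume can only be co-mounted on the same node
+```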
+
+## Upgrade & Maintenance
+
+### Upgrading NSO
+
+* **Persistent Storage**: Ensure that the NSO running directory uses persistent storage to maintain data integrity during upgrades.
+* **Testing**: Test upgrades on a dummy instance before applying them to production. Clone the existing PVC and spin up a new NSO instance for testing.
+* **Rolling Upgrades**: Update the container image version in YAML manifests or Helm charts. Delete the old NSO pods to allow Kubernetes to deploy the new ones. This minimizes downtime and ensures a smooth transition to the new version.
+
+### Cluster Maintenance
+
+* **Rolling Upgrades**: Perform rolling node upgrades to minimize downtime and maintain high availability. Verify compatibility with the target Kubernetes API version and your resource definitions before upgrading.
+* **Node Draining**: Cordon and drain nodes to safely migrate NSO instances during maintenance, keeping the cluster functional while individual nodes are serviced.
+
+## Conclusion
+
+By adhering to these best practices, you can achieve a robust, secure, and efficient deployment of Cisco NSO on Kubernetes. The guidelines help maintain operational stability, improve performance, and enhance overall manageability, giving you a reliable and scalable Kubernetes environment for NSO.
diff --git a/best-practices/scaling-and-performance-optimization.md b/best-practices/scaling-and-performance-optimization.md
new file mode 100644
index 00000000..8c98ba05
--- /dev/null
+++ b/best-practices/scaling-and-performance-optimization.md
@@ -0,0 +1,10 @@
+---
+description: Optimize NSO for scaling and performance.
+icon: chart-mixed
+---
+
+# Scaling and Performance Optimization
+
+Visit the link below to learn more.
+
+{% embed url="https://cisco-tailf.gitbook.io/nso-docs/guides/development/advanced-development/scaling-and-performance-optimization" %}
diff --git a/developer-reference/erlang-api-reference.md b/developer-reference/erlang-api-reference.md
deleted file mode 100644
index 22d2714c..00000000
--- a/developer-reference/erlang-api-reference.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-description: NSO Erlang API Reference.
-icon: square-e
----
-
-# Erlang API Reference
-
-Visit the link below to learn more.
-
-{% embed url="https://developer.cisco.com/docs/nso-api-6.5/nso-erlang-api-api-overview/" %}
diff --git a/developer-reference/erlang/README.md b/developer-reference/erlang/README.md
deleted file mode 100644
index 8b06aad4..00000000
--- a/developer-reference/erlang/README.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-icon: square-e
----
-
-# Erlang API Reference
-
-The `econfd` application is the Erlang API towards the ConfD daemon. It is delivered as an OTP application, which must be started by the system that wishes to interface with ConfD. As an alternative, the supervisor `econfd_sup` can be started directly.
-
-This is the equivalent of libconfd.so for C programmers.
-
-The interface towards ConfD is a socket-based IPC interface; thus this application, econfd, executes in a different address space than ConfD itself. The protocol between econfd and ConfD is almost the same regardless of whether econfd (Erlang API) or libconfd.so (C API) is used.
-
-Thus the architecture is according to the following picture, which illustrates the overall design from an OTP perspective:
-
-*(figure: Architecture)*
-
-The econfd OTP application consists of the following parts.
-
-### Data provider API
-
-Module [econfd](econfd.md)
-
-This API consists of a gen\_server (econfd\_daemon) which needs to get a number of callback functions installed. This API is used when we need to implement an external data provider, typically for statistics data that is part of the data model but not part of the actual configuration.
-
-### CDB API
-
-Module [econfd\_cdb](econfd_cdb.md)
-
-This API is the CDB database client API. It is used to read from (and write to) CDB.
-
-### MAAPI API
-
-Module [econfd\_maapi](econfd_maapi.md)
-
-This API is used when we wish to implement proprietary agents. It is also used by user-defined validation code that needs to attach to the currently executing transaction and read its "not yet committed" data.
-
-### Event Notifications API
-
-Module [econfd\_notif](econfd_notif.md)
-
-This API is used when we wish to receive notifications from ConfD describing certain events.
-
-### HA API
-
-Module [econfd\_ha](econfd_ha.md)
-
-This API is used by an optional surrounding HA (high availability) framework which needs to notify ConfD about various HA-related events.
-
-### Schema API
-
-Module [econfd\_schema](econfd_schema.md)
-
-This API is used to access schema information (i.e. the internal representation of YANG modules), making it possible to navigate the schema trees and obtain and use structure and type information.
-
-In order to use the econfd API, familiarity with the corresponding C API is necessary. This edoc documentation is fairly thin: in practice all types are documented, but to figure out the semantics of a certain function it is necessary to read the man page for the equivalent C function.
diff --git a/developer-reference/erlang/econfd.md b/developer-reference/erlang/econfd.md
deleted file mode 100644
index 924043c0..00000000
--- a/developer-reference/erlang/econfd.md
+++ /dev/null
@@ -1,1706 +0,0 @@
-# Module econfd
-
-An Erlang interface equivalent to the confd_lib_dp C-API (documented in confd_lib_dp(3)).
-
-This module is used to connect to ConfD and provide callback functions so that ConfD can populate its northbound agent interfaces with external data. Thus the library consists of a number of API functions whose purpose is to install different callback functions at different points in the XML tree which is the representation of the device configuration. Read more about callpoints in the ConfD User Guide.
-
-
-## Types
-
-### address/0
-
-```erlang
--type address() :: #econfd_conn_ip{} | #econfd_conn_local{}.
-```
-
-### cb_action/0
-
-```erlang
--type cb_action() ::
- cb_action_act() | cb_action_cmd() | cb_action_init().
-```
-
-Related types: [cb\_action\_act()](#cb_action_act-0), [cb\_action\_cmd()](#cb_action_cmd-0), [cb\_action\_init()](#cb_action_init-0)
-
-It is the callback for #confd_action_cb.action
-
-
-### cb_action_act/0
-
-```erlang
--type cb_action_act() ::
- fun((U :: #confd_user_info{},
- Name :: qtag(),
- KP :: ikeypath(),
- [Param :: tagval()]) ->
- ok |
- {ok, [Result :: tagval()]} |
- {error, error_reason()}).
-```
-
-Related types: [error\_reason()](#error_reason-0), [ikeypath()](#ikeypath-0), [qtag()](#qtag-0), [tagval()](#tagval-0)
-
-It is the callback for #confd_action_cb.action when invoked as an action request. If a new worker socket was set up in cb_action_init, that socket will be closed when the callback returns.
-
-
-### cb_action_cmd/0
-
-```erlang
--type cb_action_cmd() ::
- fun((U :: #confd_user_info{},
- Name :: binary(),
- Path :: binary(),
- [Arg :: binary()]) ->
- ok |
- {ok, [Result :: binary()]} |
- {error, error_reason()}).
-```
-
-Related types: [error\_reason()](#error_reason-0)
-
-It is the callback for #confd_action_cb.action when invoked as a CLI command callback.
-
-
-### cb_action_init/0
-
-```erlang
--type cb_action_init() ::
- fun((U :: #confd_user_info{}, EconfdOpaque :: term()) ->
- ok |
- {ok, #confd_user_info{}} |
- {error, error_reason()}).
-```
-
-Related types: [error\_reason()](#error_reason-0)
-
-It is the callback for #confd_action_cb.init. If the action should be done in a separate socket, the call to econfd:new_worker_socket/3 must be done here. The worker and its socket will be closed after the cb_action() returns.
-
-
-### cb_authentication/0
-
-```erlang
--type cb_authentication() ::
- fun((#confd_authentication_ctx{}) ->
- ok | error | {error, binary()}).
-```
-
-The callback for #confd_authentication_cb.auth
-
-
-### cb_candidate_commit/0
-
-```erlang
--type cb_candidate_commit() ::
- fun((#confd_db_ctx{}, Timeout :: integer()) ->
- ok | {error, error_reason()}).
-```
-
-Related types: [error\_reason()](#error_reason-0)
-
-The callback for #confd_db_cbs.candidate_commit
-
-
-### cb_completion_action/0
-
-```erlang
--type cb_completion_action() ::
- fun((U :: #confd_user_info{},
- CliStyle :: integer(),
- Token :: binary(),
- CompletionChar :: integer(),
- IKP :: ikeypath(),
- CmdPath :: binary(),
- Id :: binary(),
- TP :: term(),
- Extra :: term()) ->
- [string() |
- {info, string()} |
- {desc, string()} |
- default]).
-```
-
-Related types: [ikeypath()](#ikeypath-0)
-
-It is the callback for #confd_action_cb.action when invoked as a CLI command completion.
-
-
-### cb_create/0
-
-```erlang
--type cb_create() ::
- fun((T :: confd_trans_ctx(), KP :: ikeypath()) ->
- ok |
- {ok, confd_trans_ctx()} |
- {error, error_reason()} |
- delayed_response).
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0), [ikeypath()](#ikeypath-0)
-
-It is the callback for #confd_data_cbs.create. Only used when we use external database config data, i.e. not for statistics.
-
-
-### cb_ctx/0
-
-```erlang
--type cb_ctx() ::
- fun((confd_trans_ctx()) ->
- ok | {ok, confd_trans_ctx()} | {error, error_reason()}).
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0)
-
-The callback for #confd_trans_validate_cbs.init and #confd_trans_cbs.init as well as several other callbacks in #confd_trans_cbs\{\}
-
-
-### cb_db/0
-
-```erlang
--type cb_db() ::
- fun((#confd_db_ctx{}, DbName :: integer()) ->
- ok | {error, error_reason()}).
-```
-
-Related types: [error\_reason()](#error_reason-0)
-
-The callback for #confd_db_cbs.lock, #confd_db_cbs.unlock, and #confd_db_cbs.delete_config
-
-
-### cb_exists_optional/0
-
-```erlang
--type cb_exists_optional() ::
- fun((T :: confd_trans_ctx(), KP :: ikeypath()) ->
- {ok, cb_exists_optional_reply()} |
- {ok, cb_exists_optional_reply(), confd_trans_ctx()} |
- {error, error_reason()} |
- delayed_response).
-```
-
-Related types: [cb\_exists\_optional\_reply()](#cb_exists_optional_reply-0), [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0), [ikeypath()](#ikeypath-0)
-
-This is the callback for #confd_data_cbs.exists_optional. The exists_optional callback must be present if our YANG model has presence containers or leafs of type empty outside of unions.
-
-If type empty leafs are in unions, then cb_get_elem() is used instead.
-
-
-### cb_exists_optional_reply/0
-
-```erlang
--type cb_exists_optional_reply() :: boolean().
-```
-
-### cb_find_next/0
-
-```erlang
--type cb_find_next() ::
- fun((T :: confd_trans_ctx(),
- KP :: ikeypath(),
- FindNextType :: integer(),
- PrevKey :: key()) ->
- {ok, cb_find_next_reply()} |
- {ok, cb_find_next_reply(), confd_trans_ctx()} |
- {error, error_reason()} |
- delayed_response).
-```
-
-Related types: [cb\_find\_next\_reply()](#cb_find_next_reply-0), [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0), [ikeypath()](#ikeypath-0), [key()](#key-0)
-
-This is the callback for #confd_data_cbs.find_next.
-
-
-### cb_find_next_object/0
-
-```erlang
--type cb_find_next_object() ::
- fun((T :: confd_trans_ctx(),
- KP :: ikeypath(),
- FindNextType :: integer(),
- PrevKey :: key()) ->
- {ok, cb_find_next_object_reply()} |
- {ok, cb_find_next_object_reply(), confd_trans_ctx()} |
- {ok, objects(), TimeoutMillisecs :: integer()} |
- {ok,
- objects(),
- TimeoutMillisecs :: integer(),
- confd_trans_ctx()} |
- {error, error_reason()} |
- delayed_response).
-```
-
-Related types: [cb\_find\_next\_object\_reply()](#cb_find_next_object_reply-0), [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0), [ikeypath()](#ikeypath-0), [key()](#key-0), [objects()](#objects-0)
-
-Optional callback which combines the functionality of find_next() and get_object(), and adds the possibility to return multiple objects. It is the callback for #confd_data_cbs.find_next_object. For a detailed description of the two forms of the value list, please refer to the "Value Array" and "Tag Value Array" specifications, respectively, in the XML STRUCTURES section of the confd_types(3) manual page.
-
-
-### cb_find_next_object_reply/0
-
-```erlang
--type cb_find_next_object_reply() ::
- vals_next() | tag_val_object_next() | {false, undefined}.
-```
-
-Related types: [tag\_val\_object\_next()](#tag_val_object_next-0), [vals\_next()](#vals_next-0)
-
-### cb_find_next_reply/0
-
-```erlang
--type cb_find_next_reply() ::
- {Key :: key(), Next :: term()} | {false, undefined}.
-```
-
-Related types: [key()](#key-0)
-
-### cb_get_attrs/0
-
-```erlang
--type cb_get_attrs() ::
- fun((T :: confd_trans_ctx(),
- KP :: ikeypath(),
- [Attr :: integer()]) ->
- {ok, cb_get_attrs_reply()} |
- {ok, cb_get_attrs_reply(), confd_trans_ctx()} |
- {error, error_reason()} |
- delayed_response).
-```
-
-Related types: [cb\_get\_attrs\_reply()](#cb_get_attrs_reply-0), [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0), [ikeypath()](#ikeypath-0)
-
-This is the callback for #confd_data_cbs.get_attrs.
-
-
-### cb_get_attrs_reply/0
-
-```erlang
--type cb_get_attrs_reply() ::
- [{Attr :: integer(), V :: value()}] | not_found.
-```
-
-Related types: [value()](#value-0)
-
-### cb_get_case/0
-
-```erlang
--type cb_get_case() ::
- fun((T :: confd_trans_ctx(),
- KP :: ikeypath(),
- ChoicePath :: [qtag()]) ->
- {ok, cb_get_case_reply()} |
- {ok, cb_get_case_reply(), confd_trans_ctx()} |
- {error, error_reason()} |
- delayed_response).
-```
-
-Related types: [cb\_get\_case\_reply()](#cb_get_case_reply-0), [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0), [ikeypath()](#ikeypath-0), [qtag()](#qtag-0)
-
-This is the callback for #confd_data_cbs.get_case. Only used when we use 'choice' in the data model. Normally ChoicePath is just a single element with the name of the choice, but if we have nested choices without intermediate data nodes, it will be similar to an ikeypath, i.e. a reversed list of choice and case names giving the path through the nested choices.
-
-
-### cb_get_case_reply/0
-
-```erlang
--type cb_get_case_reply() :: Case :: qtag() | not_found.
-```
-
-Related types: [qtag()](#qtag-0)
-
-### cb_get_elem/0
-
-```erlang
--type cb_get_elem() ::
- fun((T :: confd_trans_ctx(), KP :: ikeypath()) ->
- {ok, cb_get_elem_reply()} |
- {ok, cb_get_elem_reply(), confd_trans_ctx()} |
- {error, error_reason()} |
- delayed_response).
-```
-
-Related types: [cb\_get\_elem\_reply()](#cb_get_elem_reply-0), [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0), [ikeypath()](#ikeypath-0)
-
-This is the callback for #confd_data_cbs.get_elem.
-
-
-### cb_get_elem_reply/0
-
-```erlang
--type cb_get_elem_reply() :: value() | not_found.
-```
-
-Related types: [value()](#value-0)
-
-### cb_get_log_times/0
-
-```erlang
--type cb_get_log_times() ::
- fun((#confd_notification_ctx{}) ->
- {ok,
- {Created :: datetime(),
- Aged :: datetime() | not_found}} |
- {error, error_reason()}).
-```
-
-Related types: [datetime()](#datetime-0), [error\_reason()](#error_reason-0)
-
-The callback for #confd_notification_stream_cbs.get_log_times
-
-
-### cb_get_next/0
-
-```erlang
--type cb_get_next() ::
- fun((T :: confd_trans_ctx(), KP :: ikeypath(), Prev :: term()) ->
- {ok, cb_get_next_reply()} |
- {ok, cb_get_next_reply(), confd_trans_ctx()} |
- {error, error_reason()} |
- delayed_response).
-```
-
-Related types: [cb\_get\_next\_reply()](#cb_get_next_reply-0), [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0), [ikeypath()](#ikeypath-0)
-
-This is the callback for #confd_data_cbs.get_next. Prev is the integer -1 on the first call.
-
-
-### cb_get_next_object/0
-
-```erlang
--type cb_get_next_object() ::
- fun((T :: confd_trans_ctx(), KP :: ikeypath(), Prev :: term()) ->
- {ok, cb_get_next_object_reply()} |
- {ok, cb_get_next_object_reply(), confd_trans_ctx()} |
- {ok, objects(), TimeoutMillisecs :: integer()} |
- {ok,
- objects(),
- TimeoutMillisecs :: integer(),
- confd_trans_ctx()} |
- {error, error_reason()} |
- delayed_response).
-```
-
-Related types: [cb\_get\_next\_object\_reply()](#cb_get_next_object_reply-0), [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0), [ikeypath()](#ikeypath-0), [objects()](#objects-0)
-
-Optional callback which combines the functionality of get_next() and get_object(), and adds the possibility to return multiple objects. It is the callback for #confd_data_cbs.get_next_object. For a detailed description of the two forms of the value list, please refer to the "Value Array" and "Tag Value Array" specifications, respectively, in the XML STRUCTURES section of the confd_types(3) manual page.
-
-
-### cb_get_next_object_reply/0
-
-```erlang
--type cb_get_next_object_reply() ::
- vals_next() | tag_val_object_next() | {false, undefined}.
-```
-
-Related types: [tag\_val\_object\_next()](#tag_val_object_next-0), [vals\_next()](#vals_next-0)
-
-### cb_get_next_reply/0
-
-```erlang
--type cb_get_next_reply() ::
- {Key :: key(), Next :: term()} | {false, undefined}.
-```
-
-Related types: [key()](#key-0)
-
-### cb_get_object/0
-
-```erlang
--type cb_get_object() ::
- fun((T :: confd_trans_ctx(), KP :: ikeypath()) ->
- {ok, cb_get_object_reply()} |
- {ok, cb_get_object_reply(), confd_trans_ctx()} |
- {error, error_reason()} |
- delayed_response).
-```
-
-Related types: [cb\_get\_object\_reply()](#cb_get_object_reply-0), [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0), [ikeypath()](#ikeypath-0)
-
-Optional callback which is used to return an entire object. It is the callback for #confd_data_cbs.get_object. For a detailed description of the two forms of the value list, please refer to the "Value Array" and "Tag Value Array" specifications, respectively, in the XML STRUCTURES section of the confd_types(3) manual page.
-
-
-### cb_get_object_reply/0
-
-```erlang
--type cb_get_object_reply() :: vals() | tag_val_object() | not_found.
-```
-
-Related types: [tag\_val\_object()](#tag_val_object-0), [vals()](#vals-0)
-
-### cb_lock_partial/0
-
-```erlang
--type cb_lock_partial() ::
- fun((#confd_db_ctx{},
- DbName :: integer(),
- LockId :: integer(),
- [ikeypath()]) ->
- ok | {error, error_reason()}).
-```
-
-Related types: [error\_reason()](#error_reason-0), [ikeypath()](#ikeypath-0)
-
-The callback for #confd_db_cbs.lock_partial
-
-
-### cb_move_after/0
-
-```erlang
--type cb_move_after() ::
- fun((T :: confd_trans_ctx(),
- KP :: ikeypath(),
- PrevKeys :: {value()}) ->
- ok |
- {ok, confd_trans_ctx()} |
- {error, error_reason()} |
- delayed_response).
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0), [ikeypath()](#ikeypath-0), [value()](#value-0)
-
-This is the callback for #confd_data_cbs.move_after. PrevKeys == \{\} means that the list entry should become the first one.
-
-
-### cb_num_instances/0
-
-```erlang
--type cb_num_instances() ::
- fun((T :: confd_trans_ctx(), KP :: ikeypath()) ->
- {ok, cb_num_instances_reply()} |
- {ok, cb_num_instances_reply(), confd_trans_ctx()} |
- {error, error_reason()} |
- delayed_response).
-```
-
-Related types: [cb\_num\_instances\_reply()](#cb_num_instances_reply-0), [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0), [ikeypath()](#ikeypath-0)
-
-Optional callback; if it doesn't exist, it will be emulated by consecutive calls to get_next(). It is the callback for #confd_data_cbs.num_instances.
-
-
-### cb_num_instances_reply/0
-
-```erlang
--type cb_num_instances_reply() :: integer().
-```
-
-### cb_ok/0
-
-```erlang
--type cb_ok() ::
- fun((confd_trans_ctx()) -> ok | {error, error_reason()}).
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0)
-
-The callback for #confd_trans_cbs.finish and #confd_trans_validate_cbs.stop
-
-
-### cb_ok_db/0
-
-```erlang
--type cb_ok_db() ::
- fun((#confd_db_ctx{}) -> ok | {error, error_reason()}).
-```
-
-Related types: [error\_reason()](#error_reason-0)
-
-The callback for #confd_db_cbs.candidate_confirming_commit and several other callbacks in #confd_db_cbs\{\}
-
-
-### cb_remove/0
-
-```erlang
--type cb_remove() ::
- fun((T :: confd_trans_ctx(), KP :: ikeypath()) ->
- ok |
- {ok, confd_trans_ctx()} |
- {error, error_reason()} |
- delayed_response).
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0), [ikeypath()](#ikeypath-0)
-
-It is the callback for #confd_data_cbs.remove. Only used when we use external database config data, i.e. not for statistics.
-
-
-### cb_replay/0
-
-```erlang
--type cb_replay() ::
- fun((#confd_notification_ctx{},
- Start :: datetime(),
- Stop :: datetime() | undefined) ->
- ok | {error, error_reason()}).
-```
-
-Related types: [datetime()](#datetime-0), [error\_reason()](#error_reason-0)
-
-The callback for #confd_notification_stream_cbs.replay
-
-
-### cb_set_attr/0
-
-```erlang
--type cb_set_attr() ::
- fun((T :: confd_trans_ctx(),
- KP :: ikeypath(),
- Attr :: integer(),
- cb_set_attr_value()) ->
- ok |
- {ok, confd_trans_ctx()} |
- {error, error_reason()} |
- delayed_response).
-```
-
-Related types: [cb\_set\_attr\_value()](#cb_set_attr_value-0), [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0), [ikeypath()](#ikeypath-0)
-
-This is the callback for #confd_data_cbs.set_attr. Value == undefined means that the attribute should be deleted.
-
-
-### cb_set_attr_value/0
-
-```erlang
--type cb_set_attr_value() :: value() | undefined.
-```
-
-Related types: [value()](#value-0)
-
-### cb_set_case/0
-
-```erlang
--type cb_set_case() ::
- fun((T :: confd_trans_ctx(),
- KP :: ikeypath(),
- ChoicePath :: [qtag()],
- Case :: qtag() | '$none') ->
- ok |
- {ok, confd_trans_ctx()} |
- {error, error_reason()} |
- delayed_response).
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0), [ikeypath()](#ikeypath-0), [qtag()](#qtag-0)
-
-This is the callback for #confd_data_cbs.set_case. Only used when we use 'choice' in the data model. Case == '$none' means that no case is chosen (i.e. all have been deleted). Normally ChoicePath is just a single element with the name of the choice, but if we have nested choices without intermediate data nodes, it will be similar to an ikeypath, i.e. a reversed list of choice and case names giving the path through the nested choices.
-
-
-### cb_set_elem/0
-
-```erlang
--type cb_set_elem() ::
- fun((T :: confd_trans_ctx(),
- KP :: ikeypath(),
- Value :: value()) ->
- ok |
- {ok, confd_trans_ctx()} |
- {error, error_reason()} |
- delayed_response).
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0), [ikeypath()](#ikeypath-0), [value()](#value-0)
-
-It is the callback for #confd_data_cbs.set_elem. Only used when we use external database config data, i.e. not for statistics.
-
-
-### cb_str_to_val/0
-
-```erlang
--type cb_str_to_val() ::
- fun((TypeCtx :: term(), String :: string()) ->
- {ok, Value :: value()} |
- error |
- {error, Reason :: binary()} |
- none()).
-```
-
-Related types: [value()](#value-0)
-
-The callback for #confd_type_cbs.str_to_val. The TypeCtx argument is currently unused (passed as 'undefined'). The function may fail - this is equivalent to returning 'error'.
-
-
-### cb_trans_lock/0
-
-```erlang
--type cb_trans_lock() ::
- fun((confd_trans_ctx()) ->
- ok |
- {ok, confd_trans_ctx()} |
- {error, error_reason()} |
- confd_already_locked).
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0)
-
-The callback for #confd_trans_cbs.trans_lock. The confd_already_locked return value is equivalent to \{error, #confd_error\{ code = in_use \}\}.
-
-
-### cb_unlock_partial/0
-
-```erlang
--type cb_unlock_partial() ::
- fun((#confd_db_ctx{},
- DbName :: integer(),
- LockId :: integer()) ->
- ok | {error, error_reason()}).
-```
-
-Related types: [error\_reason()](#error_reason-0)
-
-The callback for #confd_db_cbs.unlock_partial
-
-
-### cb_val_to_str/0
-
-```erlang
--type cb_val_to_str() ::
- fun((TypeCtx :: term(), Value :: value()) ->
- {ok, String :: string()} |
- error |
- {error, Reason :: binary()} |
- none()).
-```
-
-Related types: [value()](#value-0)
-
-The callback for #confd_type_cbs.val_to_str. The TypeCtx argument is currently unused (passed as 'undefined'). The function may fail - this is equivalent to returning 'error'.
-
-
-### cb_validate/0
-
-```erlang
--type cb_validate() ::
- fun((T :: confd_trans_ctx(),
- KP :: ikeypath(),
- Newval :: value()) ->
- ok |
- {ok, confd_trans_ctx()} |
- {validation_warn, Reason :: binary()} |
- {error, error_reason()}).
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0), [ikeypath()](#ikeypath-0), [value()](#value-0)
-
-It is the callback for #confd_valpoint_cb.validate.
-
-
-### cb_validate_value/0
-
-```erlang
--type cb_validate_value() ::
- fun((TypeCtx :: term(), Value :: value()) ->
- ok | error | {error, Reason :: binary()} | none()).
-```
-
-Related types: [value()](#value-0)
-
-The callback for #confd_type_cbs.validate. The TypeCtx argument is currently unused (passed as 'undefined'). The function may fail - this is equivalent to returning 'error'.
-
-
-### cb_write/0
-
-```erlang
--type cb_write() ::
- fun((confd_trans_ctx()) ->
- ok |
- {ok, confd_trans_ctx()} |
- {error, error_reason()} |
- confd_in_use).
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0)
-
-The callback for #confd_trans_cbs.write_start and #confd_trans_cbs.prepare. The confd_in_use return value is equivalent to \{error, #confd_error\{ code = in_use \}\}.
-
-
-### cb_write_all/0
-
-```erlang
--type cb_write_all() ::
- fun((T :: confd_trans_ctx(), KP :: ikeypath()) ->
- ok |
- {ok, confd_trans_ctx()} |
- {error, error_reason()} |
- delayed_response).
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0), [ikeypath()](#ikeypath-0)
-
-This is the callback for #confd_data_cbs.write_all. The KP argument is currently always [], since the callback does not pertain to any particular data node.
-
-
-### cmp_op/0
-
-```erlang
--type cmp_op() :: 0 | 1 | 2 | 3 | 4 | 5 | 6.
-```
-
-### confd_trans_ctx/0
-
-```erlang
--type confd_trans_ctx() :: #confd_trans_ctx{}.
-```
-
-### connect_result/0
-
-```erlang
--type connect_result() ::
- {ok, socket()} | {error, error_reason()} | {error, atom()}.
-```
-
-Related types: [error\_reason()](#error_reason-0), [socket()](#socket-0)
-
-This is the return type of the connect() function.
-
-
-### datetime/0
-
-```erlang
--type datetime() :: {C_DATETIME :: integer(), datetime_date_and_time()}.
-```
-
-Related types: [datetime\_date\_and\_time()](#datetime_date_and_time-0)
-
-The value representation for yang:date-and-time, also used in the API functions for notification streams.
-
-
-### datetime_date_and_time/0
-
-```erlang
--type datetime_date_and_time() ::
- {Year :: integer(),
- Month :: integer(),
- Day :: integer(),
- Hour :: integer(),
- Minute :: integer(),
- Second :: integer(),
- MicroSecond :: integer(),
- TZ :: integer(),
- TZMinutes :: integer()}.
-```
-
-### error_reason/0
-
-```erlang
--type error_reason() :: binary() | #confd_error{} | tuple().
-```
-
-The callback functions may return errors either as a plain string or via a #confd_error\{\} record - see econfd.hrl and the section EXTENDED ERROR REPORTING in confd_lib_lib(3) (tuple() is only for internal ConfD/NCS use). \{error, String\} is equivalent to \{error, #confd_error\{ code = application, str = String \}\}.
-
-
-### exec_op/0
-
-```erlang
--type exec_op() :: 7 | 8 | 9 | 10 | 11 | 13 | 12.
-```
-
-### ikeypath/0
-
-```erlang
--type ikeypath() :: [qtag() | key()].
-```
-
-Related types: [key()](#key-0), [qtag()](#qtag-0)
-
-An ikeypath() is a list describing a path down into the data tree. The ikeypaths are used to denote specific objects in the XML instance document. The list is in backwards order, thus the head of the list is the leaf element. All the data callbacks defined in #confd_data_cbs\{\} receive ikeypath() lists as an argument. The last (top) element of the list is a pair `[NS|XmlTag]` where NS is the atom defining the XML namespace of the XmlTag and XmlTag is an XmlTag::atom() denoting the toplevel XML element. Elements in the list that have a different namespace than their parent are also qualified through such a pair with the element's namespace, but all other elements are represented by their unqualified tag() atom. Thus an ikeypath() uniquely addresses an instance of an element in the configuration XML tree. List entries are identified by an element in the ikeypath() list expressed as \{Key\} or, when we are using CDB, as \[Integer]. During an individual CDB session all the elements are implicitly numbered, so through a call to econfd_cdb:num_instances/2 we can retrieve how many entries (N) a given list has, and then retrieve those entries (0 - (N-1)) by inserting \[I] as the key.
-
-
-### ip/0
-
-```erlang
--type ip() :: ipv4() | ipv6().
-```
-
-Related types: [ipv4()](#ipv4-0), [ipv6()](#ipv6-0)
-
-### ipv4/0
-
-```erlang
--type ipv4() :: {0..255, 0..255, 0..255, 0..255}.
-```
-
-### ipv6/0
-
-```erlang
--type ipv6() ::
- {0..65535,
- 0..65535,
- 0..65535,
- 0..65535,
- 0..65535,
- 0..65535,
- 0..65535,
- 0..65535}.
-```
-
-### key/0
-
-```erlang
--type key() :: {value()} | [Index :: integer()].
-```
-
-Related types: [value()](#value-0)
-
-Keys are parts of ikeypath(). In the YANG data model we define how many keys a list node has. If we have 1 key, the key is an arity-1 tuple; with 2 keys, an arity-2 tuple; and so forth. The \[Index] notation is only valid for keys in ikeypaths when we use CDB.
-
-
-### list_filter_op/0
-
-```erlang
--type list_filter_op() :: cmp_op() | exec_op().
-```
-
-Related types: [cmp\_op()](#cmp_op-0), [exec\_op()](#exec_op-0)
-
-### list_filter_type/0
-
-```erlang
--type list_filter_type() :: 0 | 1 | 2 | 3 | 4 | 5 | 6.
-```
-
-### namespace/0
-
-```erlang
--type namespace() :: atom().
-```
-
-### objects/0
-
-```erlang
--type objects() :: [vals_next() | tag_val_object_next() | false].
-```
-
-Related types: [tag\_val\_object\_next()](#tag_val_object_next-0), [vals\_next()](#vals_next-0)
-
-### qtag/0
-
-```erlang
--type qtag() :: tag() | tag_cons(namespace(), tag()).
-```
-
-Related types: [namespace()](#namespace-0), [tag()](#tag-0), [tag\_cons()](#tag_cons-2)
-
-A "qualified tag" is either a single tag or a pair of a namespace and a tag. An example could be 'interface' or \['http://example.com/ns/interfaces/2.1' | interface]
-
-
-### socket/0
-
-```erlang
--type socket() ::
- {gen_tcp, gen_tcp:socket()} |
- {local_ipc, socket:socket()} |
- int_ipc:sock().
-```
-
-### tag/0
-
-```erlang
--type tag() :: atom().
-```
-
-### tag_cons/2
-
-```erlang
--type tag_cons(T1, T2) :: nonempty_improper_list(T1, T2).
-```
-
-### tag_val_object/0
-
-```erlang
--type tag_val_object() :: {exml, [TV :: tagval()]}.
-```
-
-Related types: [tagval()](#tagval-0)
-
-### tag_val_object_next/0
-
-```erlang
--type tag_val_object_next() :: {tag_val_object(), Next :: term()}.
-```
-
-Related types: [tag\_val\_object()](#tag_val_object-0)
-
-### tagpath/0
-
-```erlang
--type tagpath() :: [qtag()].
-```
-
-Related types: [qtag()](#qtag-0)
-
-A tagpath() is a list describing a path down into the schema tree. I.e. as opposed to an ikeypath(), it has no instance information. Additionally the last (top) element is not `[NS|XmlTag]` as in ikeypath(), but only `XmlTag` \- i.e. it needs to be combined with a namespace to uniquely identify a schema node. The other elements in the path are qualified - or not - exactly as for ikeypath().
-
-
-### tagval/0
-
-```erlang
--type tagval() ::
- {qtag(),
- value() |
- start |
- {start, Index :: integer()} |
- stop | leaf | delete}.
-```
-
-Related types: [qtag()](#qtag-0), [value()](#value-0)
-
-This is used to represent XML elements together with their values, typically in a list representing an XML subtree as in the arguments and result of the 'action' callback. Typeless elements have the special "values":
-
-* `start` \- opening container or list element.
-* `{start, Index :: integer()}` \- opening list element with CDB Index instead of key value(s) - only valid for CDB access.
-* `stop` \- closing container or list element.
-* `leaf` \- leaf with type "empty".
-* `delete` \- delete list entry.
-
-The qtag() tuple element may have the namespace()-less form (i.e. tag()) for XML elements in the "current" namespace. For a detailed description of how to represent XML as a list of tagval() elements, please refer to the "Tagged Value Array" specification in the XML STRUCTURES section of the confd_types(3) manual page.
-
-
-### transport_error/0
-
-```erlang
--type transport_error() :: timeout | closed.
-```
-
-### type/0
-
-```erlang
--type type() :: term().
-```
-
-Identifies a type definition in the schema.
-
-
-### vals/0
-
-```erlang
--type vals() :: [V :: value()].
-```
-
-Related types: [value()](#value-0)
-
-### vals_next/0
-
-```erlang
--type vals_next() :: {vals(), Next :: term()}.
-```
-
-Related types: [vals()](#vals-0)
-
-### value/0
-
-```erlang
--type value() ::
- binary() |
- tuple() |
- float() |
- boolean() |
- integer() |
- qtag() |
- {Tag :: integer(), Value :: term()} |
- [value()] |
- not_found | default.
-```
-
-Related types: [qtag()](#qtag-0), [value()](#value-0)
-
-This type is central to this library. Values are returned from the CDB functions, they are used to read and write in the MAAPI module, and they are also used as keys in ikeypath().
-
-We have the following value representation for the data model types:
-
-* string - Always represented as a single binary.
-* int32 - This is represented as a single integer.
-* int8 - \{?C_INT8, Val\}
-* int16 - \{?C_INT16, Val\}
-* int64 - \{?C_INT64, Val\}
-* uint8 - \{?C_UINT8, Val\}
-* uint16 - \{?C_UINT16, Val\}
-* uint32 - \{?C_UINT32, Val\}
-* uint64 - \{?C_UINT64, Val\}
-* inet:ipv4-address - 4-tuple
-* inet:ipv4-address-no-zone - 4-tuple
-* inet:ipv6-address - 8-tuple
-* inet:ipv6-address-no-zone - 8-tuple
-* boolean - The atoms 'true' or 'false'
-* xs:float() and xs:double() - Erlang floats
-* leaf-list - An erlang list of values.
-* binary, yang:hex-string, tailf:hex-list (etc) - \{?C_BINARY, binary()\}
-* yang:date-and-time - \{?C_DATETIME, datetime_date_and_time()\}
-* xs:duration - \{?C_DURATION, \{Y,M,D,H,M,S,Mcr\}\}
-* instance-identifier - \{?C_OBJECTREF, econfd:ikeypath()\}
-* yang:object-identifier - \{?C_OID, Int32Binary\}, where Int32Binary is a binary with OID components as 32-bit integers in the default big endianness.
-* yang:dotted-quad - \{?C_DQUAD, binary()\}
-* yang:hex-string - \{?C_HEXSTR, binary()\}
-* inet:ipv4-prefix - \{?C_IPV4PREFIX, \{\{A,B,C,D\}, PrefixLen\}\}
-* inet:ipv6-prefix - \{?C_IPV6PREFIX, \{\{A,B,C,D,E,F,G,H\}, PrefixLen\}\}
-* tailf:ipv4-address-and-prefix-length - \{?C_IPV4_AND_PLEN, \{\{A,B,C,D\}, PrefixLen\}\}
-* tailf:ipv6-address-and-prefix-length - \{?C_IPV6_AND_PLEN, \{\{A,B,C,D,E,F,G,H\}, PrefixLen\}\}
-* decimal64 - \{?C_DECIMAL64, \{Int64, FractionDigits\}\}
-* identityref - \{?C_IDENTITYREF, \{NsHash, IdentityHash\}\}
-* bits - \{?C_BIT32, Bits::integer()\}, \{?C_BIT64, Bits::integer()\}, or \{?C_BITBIG, Bits:binary()\} depending on the highest bit position assigned
-* enumeration - \{?C_ENUM_VALUE, IntVal\}, where IntVal is the integer value for a given "enum" statement according to the YANG specification. When we have compiled a YANG module into a .fxs file, we can use the --emit-hrl option to confdc(1) to create a .hrl file with macro definitions for the enum values.
-* empty - \{?C_EMPTY, 0\}. This is applicable for type empty in union, and type empty on list keys. Type empty on a leaf without a union is not represented by a value, only existence checks can be done.
-
-There is also a "pseudo type" that indicates a non-existing value, which is represented as the atom 'not_found'. Finally there is a "pseudo type" to indicate that a leaf with a default value defined in the data model does not have a value set - this is represented as the atom 'default'.
-
-For all of the above-mentioned (non-"pseudo") types we have a corresponding macro in econfd.hrl. We strongly suggest that the ?CONFD_xxx macros are used whenever we want to construct a value or match against a value. Thus we write code as:
-
-```text
- case econfd_cdb:get_elem(...) of
- {ok, ?CONFD_INT64(42)} ->
- foo;
-
- or
- econfd_cdb:set_elem(... ?CONFD_INT64(777), ...)
-
- or
- {ok, ?CONFD_INT64(I)} = econfd_cdb:get_elem(...)
-
-
-```
-
-
-## Functions
-
-### action_set_timeout/2
-
-```erlang
--spec action_set_timeout(Uinfo :: #confd_user_info{},
- Seconds :: integer()) ->
- ok | {error, Reason :: term()}.
-```
-
-Extend (or shorten) the timeout for the current action callback invocation. The timeout is given in seconds from the point in time when the function is called.
-
-
-### bitbig_bin2bm/1
-
-```erlang
-bitbig_bin2bm(Binary)
-```
-
-### bitbig_bit_is_set/2
-
-```erlang
--spec bitbig_bit_is_set(Binary :: binary(), Position :: integer()) ->
- boolean().
-```
-
-Test a bit in a C_BITBIG binary.
-
-
-### bitbig_bm2bin/1
-
-```erlang
-bitbig_bm2bin(Bitmask)
-```
-
-### bitbig_clr_bit/2
-
-```erlang
--spec bitbig_clr_bit(Binary :: binary(), Position :: integer()) ->
- binary().
-```
-
-Clear a bit in a C_BITBIG binary.
-
-
-### bitbig_pad/2
-
-```erlang
-bitbig_pad(Binary, Size)
-```
-
-### bitbig_set_bit/2
-
-```erlang
--spec bitbig_set_bit(Binary :: binary(), Position :: integer()) ->
- binary().
-```
-
-Set a bit in a C_BITBIG binary.
-
-
-### controlling_process/2
-
-```erlang
--spec controlling_process(Socket :: term(), Pid :: pid()) ->
- ok | {error, Reason :: term()}.
-```
-
-Assigns a new controlling process Pid to Socket
-
-
-### data_get_list_filter/1
-
-```erlang
--spec data_get_list_filter(Tctx :: confd_trans_ctx()) ->
- undefined | #confd_list_filter{}.
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0)
-
-Return list filter for the current operation if any.
-
-
-### data_is_filtered/1
-
-```erlang
--spec data_is_filtered(Tctx :: confd_trans_ctx()) -> boolean().
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0)
-
-Return true if the filtered flag is set on the transaction.
-
-
-### data_reply_error/2
-
-```erlang
--spec data_reply_error(Tctx :: confd_trans_ctx(),
- Error :: error_reason()) ->
- ok | {error, Reason :: term()}.
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0), [error\_reason()](#error_reason-0)
-
-Reply an error for delayed_response. Like data_reply_value() - only used in combination with delayed_response.
-
-
-### data_reply_found/1
-
-```erlang
--spec data_reply_found(Tctx :: confd_trans_ctx()) ->
- ok | {error, Reason :: term()}.
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0)
-
-Reply 'found' for delayed_response. Like data_reply_value() - only used in combination with delayed_response.
-
-
-### data_reply_next_key/3
-
-```erlang
--spec data_reply_next_key(Tctx :: confd_trans_ctx(),
- Key :: key() | false,
- Next :: term()) ->
- ok | {error, Reason :: term()}.
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0), [key()](#key-0)
-
-Reply with next key for delayed_response. Like data_reply_value() - only used in combination with delayed_response.
-
-
-### data_reply_next_object_tag_value_array/3
-
-```erlang
--spec data_reply_next_object_tag_value_array(Tctx :: confd_trans_ctx(),
- Values :: [TV :: tagval()],
- Next :: term()) ->
- ok |
- {error,
- Reason :: term()}.
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0), [tagval()](#tagval-0)
-
-Reply with tagged values and next key for delayed_response. Like data_reply_value() - only used in combination with delayed_response, and get_next_object() callback.
-
-
-### data_reply_next_object_value_array/3
-
-```erlang
--spec data_reply_next_object_value_array(Tctx :: confd_trans_ctx(),
- Values ::
- vals() |
- tag_val_object() |
- false,
- Next :: term()) ->
- ok |
- {error, Reason :: term()}.
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0), [tag\_val\_object()](#tag_val_object-0), [vals()](#vals-0)
-
-Reply with values and next key for delayed_response. Like data_reply_value() - only used in combination with delayed_response, and get_next_object() callback.
-
-
-### data_reply_next_object_value_arrays/3
-
-```erlang
--spec data_reply_next_object_value_arrays(Tctx :: confd_trans_ctx(),
- Objects :: objects(),
- TimeoutMillisecs :: integer()) ->
- ok |
- {error, Reason :: term()}.
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0), [objects()](#objects-0)
-
-Reply with multiple objects, each with values and next key, plus cache timeout, for delayed_response. Like data_reply_value() - only used in combination with delayed_response, and get_next_object() callback.
-
-
-### data_reply_not_found/1
-
-```erlang
--spec data_reply_not_found(Tctx :: confd_trans_ctx()) ->
- ok | {error, Reason :: term()}.
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0)
-
-Reply 'not found' for delayed_response. Like data_reply_value() - only used in combination with delayed_response.
-
-
-### data_reply_ok/1
-
-```erlang
--spec data_reply_ok(Tctx :: confd_trans_ctx()) ->
- ok | {error, Reason :: term()}.
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0)
-
-Reply 'ok' for delayed_response. This function can be used explicitly by the Erlang application if a data callback returns the atom delayed_response. In that case it is the responsibility of the application to later invoke one of the data_reply_xxx() functions. If delayed_response is not used, none of the explicit data replying functions need to be used.
-
-
-### data_reply_tag_value_array/2
-
-```erlang
--spec data_reply_tag_value_array(Tctx :: confd_trans_ctx(),
- TagVals :: [tagval()]) ->
- ok | {error, Reason :: term()}.
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0), [tagval()](#tagval-0)
-
-Reply a list of tagged values for delayed_response. Like data_reply_value() - only used in combination with delayed_response, and get_object() callback.
-
-
-### data_reply_value/2
-
-```erlang
--spec data_reply_value(Tctx :: confd_trans_ctx(), V :: value()) ->
- ok | {error, Reason :: term()}.
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0), [value()](#value-0)
-
-Reply a value for delayed_response. This function can be used explicitly by the Erlang application if a data callback returns the atom delayed_response. In that case it is the responsibility of the application to later invoke one of the data_reply_xxx() functions. If delayed_response is not used, none of the explicit data replying functions need to be used.
-
-
-### data_reply_value_array/2
-
-```erlang
--spec data_reply_value_array(Tctx :: confd_trans_ctx(),
- Values :: vals() | tag_val_object() | false) ->
- ok | {error, Reason :: term()}.
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0), [tag\_val\_object()](#tag_val_object-0), [vals()](#vals-0)
-
-Reply a list of values for delayed_response. Like data_reply_value() - only used in combination with delayed_response, and get_object() callback.
-
-
-### data_set_filtered/2
-
-```erlang
--spec data_set_filtered(Tctx :: confd_trans_ctx(),
- IsFiltered :: boolean()) ->
- confd_trans_ctx().
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0)
-
-Set filtered flag on transaction context in the first callback call of a list traversal. This signals that all list entries returned by the data provider for this list traversal match the filter.
-
-
-### data_set_timeout/2
-
-```erlang
--spec data_set_timeout(Tctx :: confd_trans_ctx(), Seconds :: integer()) ->
- ok | {error, Reason :: term()}.
-```
-
-Related types: [confd\_trans\_ctx()](#confd_trans_ctx-0)
-
-Extend (or shorten) the timeout for the current callback invocation. The timeout is given in seconds from the point in time when the function is called.
-
-
-### decrypt/1
-
-```erlang
--spec decrypt(_ :: binary()) ->
- {ok, binary()} |
- {error, {Ecode :: integer(), Reason :: binary()}}.
-```
-
-Decrypts a value of type tailf:aes-256-cfb-128-encrypted-string or tailf:aes-cfb-128-encrypted-string. Requires that econfd_maapi:install_crypto_keys/1 has been called in the node.
-
-
-### init_daemon/5
-
-```erlang
--spec init_daemon(Name :: atom(),
- DebugLevel :: integer(),
- Estream :: io:device(),
- Dopaque :: term(),
- Path :: string()) ->
- {ok, Pid :: pid()} | {error, Reason :: term()}.
-```
-
-Starts and links to a gen_server which connects to ConfD. This gen_server holds two sockets to ConfD, one so called control socket and one worker socket (See confd_lib_dp(3) for an explanation of those sockets.)
-
-To avoid blocking control socket callback requests due to long-running worker socket callbacks, the control socket callbacks are run in the gen_server, while the worker socket callbacks are run in a separate process that is spawned by the gen_server. This means that applications must not share e.g. MAAPI sockets between transactions, since this could result in simultaneous use of a socket by the gen_server and the spawned process.
-
-The gen_server is used to install sets of callback Funs. The gen_server state is a #confd_daemon_ctx\{\}. This structure is passed to all the callback functions.
-
-The daemon context includes a d_opaque element holding the Dopaque term - this can be used by the application to pass application specific data into the callback functions.
-
-The Name::atom() parameter is used in various debug printouts and is also used to uniquely identify the daemon.
-
-The DebugLevel parameter is used to control the debug level. The following levels are available:
-
-* ?CONFD_SILENT No debug printouts whatsoever are produced by the library.
-* ?CONFD_DEBUG Various printouts will occur for various error conditions.
-* ?CONFD_TRACE The execution of callback functions will be traced.
-
-The Estream parameter is used by all printouts from the library.
-
-
-### init_daemon/6
-
-```erlang
--spec init_daemon(Name :: atom(),
- DebugLevel :: integer(),
- Estream :: io:device(),
- Dopaque :: term(),
- Ip :: ip(),
- Port :: integer()) ->
- {ok, Pid :: pid()} | {error, Reason :: term()}.
-```
-
-Related types: [ip()](#ip-0)
-
-### log/2
-
-```erlang
--spec log(Level :: integer(), Fmt :: string()) -> ok.
-```
-
-Logs Fmt to devel.log if running internally, otherwise to standard out. Level can be one of ?CONFD_LEVEL_ERROR | ?CONFD_LEVEL_INFO | ?CONFD_LEVEL_TRACE
-
-
-### log/3
-
-```erlang
--spec log(Level :: integer(), Fmt :: string(), Args :: list()) -> ok.
-```
-
-Logs Fmt with Args to devel.log if running internally, otherwise to standard out. Level can be one of ?CONFD_LEVEL_ERROR | ?CONFD_LEVEL_INFO | ?CONFD_LEVEL_TRACE
-
-
-### log/4
-
-```erlang
--spec log(IoDevice :: io:device(),
- Level :: integer(),
- Fmt :: string(),
- Args :: list()) ->
- ok.
-```
-
-Logs Fmt with Args to devel.log if running internally, otherwise to IoDevice. Level can be one of ?CONFD_LEVEL_ERROR | ?CONFD_LEVEL_INFO | ?CONFD_LEVEL_TRACE
-
-
-### mk_filtered_next/2
-
-```erlang
-mk_filtered_next(Tctx, Next)
-```
-
-### new_worker_socket/2
-
-```erlang
--spec new_worker_socket(UserInfo :: #confd_user_info{},
- SockId :: integer()) ->
- {socket(), #confd_user_info{}} |
- {error,
- timeout | closed | not_owner | badarg |
- inet:posix() |
- any()}.
-```
-
-Related types: [socket()](#socket-0)
-
-Create a new worker socket to be used for an action invocation. When the action invocation ends, remove_worker_socket/1 should be called.
-
-
-### notification_replay_complete/1
-
-```erlang
--spec notification_replay_complete(Nctx :: #confd_notification_ctx{}) ->
- ok | {error, Reason :: term()}.
-```
-
-Call this function when replay is done
-
-
-### notification_replay_failed/2
-
-```erlang
--spec notification_replay_failed(Nctx :: #confd_notification_ctx{},
- ErrorString :: binary()) ->
- ok | {error, Reason :: term()}.
-```
-
-Call this function when replay has failed for some reason
-
-
-### notification_send/3
-
-```erlang
--spec notification_send(Nctx :: #confd_notification_ctx{},
- DateTime :: datetime(),
- TagVals :: [tagval()]) ->
- ok | {error, Reason :: term()}.
-```
-
-Related types: [datetime()](#datetime-0), [tagval()](#tagval-0)
-
-Send a notification defined at the top level of a YANG module.
-
-
-### notification_send/4
-
-```erlang
--spec notification_send(Nctx :: #confd_notification_ctx{},
- DateTime :: datetime(),
- TagVals :: [tagval()],
- IKP :: ikeypath()) ->
- ok | {error, Reason :: term()}.
-```
-
-Related types: [datetime()](#datetime-0), [ikeypath()](#ikeypath-0), [tagval()](#tagval-0)
-
-Send a notification defined as a child of a container or list in a YANG 1.1 module. IKP is the fully instantiated path for the parent of the notification in the data tree.
-
-
-### pp_kpath/1
-
-```erlang
--spec pp_kpath(IKP :: ikeypath()) -> iolist().
-```
-
-Related types: [ikeypath()](#ikeypath-0)
-
-Pretty print an ikeypath.
-
-
-### pp_kpath2/1
-
-```erlang
-pp_kpath2(Vs)
-```
-
-### pp_path_value/1
-
-```erlang
-pp_path_value(Val)
-```
-
-### pp_value/1
-
-```erlang
--spec pp_value(V :: value()) -> iolist().
-```
-
-Related types: [value()](#value-0)
-
-Pretty print a value.
-
-
-### process_next_objects/5
-
-```erlang
-process_next_objects(Rest, Ints0, TH, TraversalId, NextFun)
-```
-
-### register_action_cb/2
-
-```erlang
--spec register_action_cb(Daemon :: pid(),
- ActionCbs :: #confd_action_cb{}) ->
- ok | {error, Reason :: term()}.
-```
-
-Register action callback on an actionpoint
-
-
-### register_authentication_cb/2
-
-```erlang
--spec register_authentication_cb(Daemon :: pid(),
- AuthenticationCb ::
- #confd_authentication_cb{}) ->
- ok | {error, Reason :: term()}.
-```
-
-Register authentication callback. Note that this cannot be used to *perform* the authentication.
-
-
-### register_data_cb/2
-
-```erlang
--spec register_data_cb(Daemon :: pid(), DbCbs :: #confd_data_cbs{}) ->
- ok | {error, Reason :: term()}.
-```
-
-Register the data callbacks.
-
-
-### register_data_cb/3
-
-```erlang
--spec register_data_cb(Daemon :: pid(),
- DbCbs :: #confd_data_cbs{},
- Flags :: non_neg_integer()) ->
- ok | {error, Reason :: term()}.
-```
-
-Register the data callbacks.
-
-
-### register_db_cbs/2
-
-```erlang
--spec register_db_cbs(Daemon :: pid(), DbCbs :: #confd_db_cbs{}) ->
- ok | {error, Reason :: term()}.
-```
-
-Register external db callbacks.
-
-
-### register_done/1
-
-```erlang
--spec register_done(Daemon :: pid()) -> ok | {error, Reason :: term()}.
-```
-
-This function must be called when all callback registrations are done.
-
-
-### register_notification_stream/2
-
-```erlang
--spec register_notification_stream(Daemon :: pid(),
- NotifCbs ::
- #confd_notification_stream_cbs{}) ->
- {ok, #confd_notification_ctx{}} |
- {error, Reason :: term()}.
-```
-
-Register notification callbacks on a stream name
-
-
-### register_range_data_cb/5
-
-```erlang
--spec register_range_data_cb(Daemon :: pid(),
- DataCbs :: #confd_data_cbs{},
- Lower :: [Lower :: value()],
- Higher :: [Higher :: value()],
- IKP :: ikeypath()) ->
- ok | {error, Reason :: term()}.
-```
-
-Related types: [ikeypath()](#ikeypath-0), [value()](#value-0)
-
-Register data callbacks for a range of keys.
-
-
-### register_trans_cb/2
-
-```erlang
--spec register_trans_cb(Daemon :: pid(), TransCbs :: #confd_trans_cbs{}) ->
- ok | {error, Reason :: term()}.
-```
-
-Register transaction phase callbacks. See confd_lib_dp(3) for a thorough description of the transaction phases. The record #confd_trans_cbs\{\} contains callbacks for all of the phases of a transaction. If we use this external data API only for statistics data, then only the init() and finish() callbacks should be used. The init() callback must return 'ok', \{error, String\}, or \{ok, Tctx\} where Tctx is the same #confd_trans_ctx that was supplied to the init callback but possibly with the opaque field filled in. This field is meant to be used by the user to manage user data.
-
-
-### register_trans_validate_cb/2
-
-```erlang
--spec register_trans_validate_cb(Daemon :: pid(),
- ValidateCbs ::
- #confd_trans_validate_cbs{}) ->
- ok | {error, Reason :: term()}.
-```
-
-Register validation transaction callback. This function maps an init and a finish function for validations; see the same functions in confd_lib_dp(3). The init() callback must return 'ok', \{error, String\}, or \{ok, Tctx\} where Tctx is the same #confd_trans_ctx that was supplied to the init callback but possibly with the opaque field filled in.
-
-
-### register_valpoint_cb/2
-
-```erlang
--spec register_valpoint_cb(Daemon :: pid(),
- ValpointCbs :: #confd_valpoint_cb{}) ->
- ok | {error, Reason :: term()}.
-```
-
-Register validation callback on a valpoint
-
-
-### set_daemon_d_opaque/2
-
-```erlang
--spec set_daemon_d_opaque(Daemon :: pid(), Dopaque :: term()) -> ok.
-```
-
-Set the d_opaque field in the daemon; this field is typically used by the callbacks.
-
-
-### set_daemon_flags/2
-
-```erlang
--spec set_daemon_flags(Daemon, Flags) -> ok
- when
- Daemon :: pid(),
- Flags :: non_neg_integer().
-```
-
-Change the flag settings for a daemon. See ?CONFD_DAEMON_FLAG_XXX in econfd.hrl for the available flags. This function should be called immediately after creating the daemon context with init_daemon/6.
-
-
-### set_debug/3
-
-```erlang
--spec set_debug(Daemon :: pid(),
- DebugLevel :: integer(),
- Estream :: io:device()) ->
- ok.
-```
-
-Change the DebugLevel and/or Estream for a running daemon
-
-
-### start/0
-
-```erlang
--spec start() -> ok | {error, Reason :: term()}.
-```
-
-Starts the econfd application.
-
-
-### stop_daemon/1
-
-```erlang
--spec stop_daemon(Daemon :: pid()) -> ok.
-```
-
-Silently stop a daemon
-
-
-### unpad/1
-
-```erlang
-unpad(_)
-```
diff --git a/developer-reference/erlang/econfd_cdb.md b/developer-reference/erlang/econfd_cdb.md
deleted file mode 100644
index 3003fedb..00000000
--- a/developer-reference/erlang/econfd_cdb.md
+++ /dev/null
@@ -1,1047 +0,0 @@
-# Module econfd_cdb
-
-An Erlang interface equivalent to the CDB C-API (documented in confd_lib_cdb(3)).
-
-The econfd_cdb library is used to connect to the ConfD built-in XML database, CDB. The purpose of this API is to provide a read and subscription API to CDB.
-
-CDB owns and stores the configuration data. The user of this API typically wants to read that configuration data, and also to be notified when someone modifies the data through NETCONF, the CLI, the Web UI, or MAAPI, so that the application can re-read the configuration and act accordingly.
-
-### Paths
-
-In the C lib a path is a string. Assume the following YANG fragment:
-
-```text
- container hosts {
- list host {
- key name;
- leaf name {
- type string;
- }
- leaf domain {
- type string;
- }
- leaf defgw {
- type inet:ip-address;
- }
- container interfaces {
- list interface {
- key name;
- leaf name {
- type string;
- }
- leaf ip {
- type inet:ip-address;
- }
- leaf mask {
- type inet:ip-address;
- }
- leaf enabled {
- type boolean;
- }
- }
- }
- }
- }
-```
-
-Furthermore assume the database is populated with the following data
-
-```text
-<hosts>
-  <host>
-    <name>buzz</name>
-    <domain>tail-f.com</domain>
-    <defgw>192.168.1.1</defgw>
-    <interfaces>
-      <interface>
-        <name>eth0</name>
-        <ip>192.168.1.61</ip>
-        <mask>255.255.255.0</mask>
-        <enabled>true</enabled>
-      </interface>
-      <interface>
-        <name>eth1</name>
-        <ip>10.77.1.44</ip>
-        <mask>255.255.0.0</mask>
-        <enabled>false</enabled>
-      </interface>
-    </interfaces>
-  </host>
-</hosts>
-```
-
-The format path "/hosts/host\{buzz\}/defgw" refers to the leaf element called defgw of the host whose key (name element) is buzz.
-
-The format path "/hosts/host\{buzz\}/interfaces/interface\{eth0\}/ip" refers to the leaf element called "ip" in the "eth0" interface of the host called "buzz".
-
-In the Erlang CDB and MAAPI interfaces we instead use ikeypath() lists to address individual objects in the XML tree. The ikeypath is written in reverse order, thus the two paths above are expressed as
-
-```text
- [defgw, {<<"buzz">>}, host, [NS|hosts]]
- [ip, {<<"eth0">>}, interface, interfaces, {<<"buzz">>}, host, [NS|hosts]]
-```
-
-It is possible to loop through all entries in a list, as in:
-
-```text
-    {ok, N} = econfd_cdb:num_instances(CDB, [host,[NS|hosts]]),
- lists:map(fun(I) ->
- econfd_cdb:get_elem(CDB, [defgw,[I],host,[NS|hosts]]), .......
- end, lists:seq(0, N-1))
-
-```
-
-Thus, in a list with N entries, \[Index] (with Index ranging from 0 to N-1) acts as an implicit key during the life of a CDB read session.
-
-
-## Types
-
-### cdb_sess/0
-
-```erlang
--type cdb_sess() :: #cdb_session{}.
-```
-
-A data structure used as a handle for all of the access functions.
-
-
-### compaction_dbfile/0
-
-```erlang
--type compaction_dbfile() :: 1 | 2 | 3.
-```
-
-CDB files used for compaction. The CDB file can be one of
-
-* 1 = A.cdb
-* 2 = O.cdb
-* 3 = S.cdb
-
-
-### compaction_info/0
-
-```erlang
--type compaction_info() :: #compaction_info{}.
-```
-
-A data structure holding compaction information.
-
-
-### dbtype/0
-
-```erlang
--type dbtype() :: 1 | 2 | 3 | 4.
-```
-
-When we open a CDB session we must choose which database to read from or write to. These integers are defined in econfd.hrl.
-
-
-### err/0
-
-```erlang
--type err() :: {error, {integer(), binary()}} | {error, closed}.
-```
-
-Errors can be either
-
-* \{error, \{Ecode::integer(), Reason::binary()\}\} where Ecode is one of the error codes defined in econfd_errors.hrl, and Reason is a (possibly empty) textual description
-* \{error, closed\} if the socket gets closed
-
-
-### sub_ns/0
-
-```erlang
--type sub_ns() :: econfd:namespace() | ''.
-```
-
-Related types: [econfd:namespace()](econfd.md#namespace-0)
-
-A namespace, or '' as a wildcard (any namespace).
-
-
-### sub_type/0
-
-```erlang
--type sub_type() :: 1 | 2 | 3.
-```
-
-Subscription type
-
-* ?CDB_SUB_RUNNING - commit subscription.
-* ?CDB_SUB_RUNNING_TWOPHASE - two phase subscription, i.e. notification will be received for prepare, commit, and possibly abort.
-* ?CDB_SUB_OPERATIONAL - subscription for changes to CDB operational data.
-
-
-### subscription_sync_type/0
-
-```erlang
--type subscription_sync_type() :: 1 | 2 | 3 | 4.
-```
-
-Return value from the fun passed to wait/3, indicating what to do with further notifications coming from this transaction. These integers are defined in econfd.hrl.
-
-
-## Functions
-
-### cd/2
-
-```erlang
--spec cd(CDB, IKeypath) -> Result
- when
- CDB :: cdb_sess(),
- IKeypath :: econfd:ikeypath(),
- Result :: ok | err().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0)
-
-Change the context node of the session.
-
-Note that this function cannot be used as an existence test.
-
-
-### close/1
-
-```erlang
--spec close(Cdb_session) -> Result
- when
- Cdb_session :: Socket | CDB,
- Result :: ok | {error, econfd:error_reason()}.
-```
-
-Related types: [econfd:error\_reason()](econfd.md#error_reason-0)
-
-End the session and close the socket.
-
-
-### collect_until/3
-
-```erlang
-collect_until(T, Stop, Sofar)
-```
-
-### connect/0
-
-```erlang
--spec connect() -> econfd:connect_result().
-```
-
-Related types: [econfd:connect\_result()](econfd.md#connect_result-0)
-
-Equivalent to [connect(\{127, 0, 0, 1\})](#connect-1).
-
-
-### connect/1
-
-```erlang
--spec connect(Path) -> econfd:connect_result() when Path :: string();
- (Address) -> econfd:connect_result()
- when Address :: econfd:ip().
-```
-
-Related types: [econfd:connect\_result()](econfd.md#connect_result-0), [econfd:ip()](econfd.md#ip-0)
-
-### connect/2
-
-```erlang
--spec connect(Path, ClientName) -> econfd:connect_result()
- when Path :: string(), ClientName :: binary();
- (Address, Port) -> econfd:connect_result()
- when Address :: econfd:ip(), Port :: non_neg_integer().
-```
-
-Related types: [econfd:connect\_result()](econfd.md#connect_result-0), [econfd:ip()](econfd.md#ip-0)
-
-### connect/3
-
-```erlang
--spec connect(Address, Port, ClientName) -> econfd:connect_result()
- when
- Address :: econfd:ip(),
- Port :: non_neg_integer(),
- ClientName :: binary().
-```
-
-Related types: [econfd:connect\_result()](econfd.md#connect_result-0), [econfd:ip()](econfd.md#ip-0)
-
-### create/2
-
-```erlang
--spec create(CDB, IKeypath) -> ok | err()
- when CDB :: cdb_sess(), IKeypath :: econfd:ikeypath().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0)
-
-Only for CDB operational data: Create the element denoted by IKP.
-
-
-### delete/2
-
-```erlang
--spec delete(CDB, IKeypath) -> ok | err()
- when CDB :: cdb_sess(), IKeypath :: econfd:ikeypath().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0)
-
-Only for CDB operational data: Delete the element denoted by IKP.
-
-
-### diff_iterate/5
-
-```erlang
--spec diff_iterate(CDB, SubPoint, Fun, Flags, State) -> Result
- when
- CDB :: cdb_sess(),
- SubPoint :: pos_integer(),
- Fun ::
- fun((IKeypath, Op, OldValue, Value, State) ->
- {ok, Ret, State} | {error, term()}),
- Flags :: non_neg_integer(),
- State :: term(),
- Result :: {ok, State} | {error, term()}.
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0)
-
-Iterate over changes in CDB after a subscription triggers.
-
-This function can be called from within the fun passed to wait/3. When called it will invoke Fun for each change that matched SubPoint. If Flags is ?CDB_ITER_WANT_PREV, OldValue will be the previous value (if available). When OldValue or Value is not available (or requested), it will be the atom 'undefined'. When Op == ?MOP_MOVED_AFTER (only for "ordered-by user" list entries), Value == \{\} means that the entry was moved first in the list; otherwise Value is an econfd:key() tuple that identifies the entry it was moved after.
-
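-For example, a subscriber could collect the keypaths of all created entries like this (a sketch, assuming the ?MOP_CREATED, ?ITER_RECURSE, ?ITER_CONTINUE and ?CDB_ITER_WANT_PREV macros from econfd.hrl):
-
-```erlang
-%% Collect the ikeypath of every created node below the subscription point.
-collect_created(CDB, Point) ->
-    F = fun(IKP, ?MOP_CREATED, _Old, _New, Acc) ->
-                {ok, ?ITER_RECURSE, [IKP | Acc]};
-           (_IKP, _Op, _Old, _New, Acc) ->
-                {ok, ?ITER_CONTINUE, Acc}
-        end,
-    econfd_cdb:diff_iterate(CDB, Point, F, ?CDB_ITER_WANT_PREV, []).
-```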
-
-### do_connect/2
-
-```erlang
--spec do_connect(Address, ClientName) -> econfd:connect_result()
- when
- Address ::
- #econfd_conn_ip{} | #econfd_conn_local{},
- ClientName :: binary().
-```
-
-Related types: [econfd:connect\_result()](econfd.md#connect_result-0)
-
-Connect to CDB.
-
-If the port is changed it must also be changed in confd.conf. A call to connect() is typically followed by a call to either new_session() for a reading session, subscribe_session() for a subscription socket, or any of the write API functions for a data socket. ClientName is a string which ConfD will use as an identifier when e.g. reporting status.
-
-
-### end_session/1
-
-```erlang
--spec end_session(CDB) -> {ok, econfd:socket()} when CDB :: cdb_sess().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [econfd:socket()](econfd.md#socket-0)
-
-Terminate the session.
-
-This releases the lock on CDB which is active during a read session. Returns a socket that can be re-used in new_session/2. We use connect() to establish a read socket to CDB. When the socket is closed, the read session is ended. We can reuse the same socket for another read session, but we must then end the current session and create another one using new_session/2. While we have a live CDB read session, CDB is locked for writing; thus all external entities trying to modify CDB are blocked as long as we have an open read session. It is very important to remember to either end_session() or close() once we have read what we wished to read.
-
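-A typical read session is therefore short-lived, as in this sketch (NS stands for the namespace atom of the example model at the top of this page, and the ?CDB_RUNNING macro is assumed from econfd.hrl):
-
-```erlang
-%% Open a read session, read one leaf, and release the CDB lock promptly.
-read_defgw(NS, HostName) ->
-    {ok, Sock} = econfd_cdb:connect({127, 0, 0, 1}),
-    {ok, CDB} = econfd_cdb:new_session(Sock, ?CDB_RUNNING),
-    Res = econfd_cdb:get_elem(CDB, [defgw, {HostName}, host, [NS|hosts]]),
-    {ok, _Sock} = econfd_cdb:end_session(CDB),
-    Res.
-```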
-
-### exists/2
-
-```erlang
--spec exists(CDB, IKeypath) -> Result
- when
- CDB :: cdb_sess(),
- IKeypath :: econfd:ikeypath(),
- Result :: {ok, boolean()} | err().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0)
-
-Checks the existence of an object.
-
-Leafs in the data model may be optional, and presence containers and list entries may or may not exist. This function checks whether a node exists in CDB, returning \{ok, true\} if it exists and \{ok, false\} if not.
-
-
-### get_case/3
-
-```erlang
--spec get_case(CDB, IKeypath, Choice) -> Result
- when
- CDB :: cdb_sess(),
- IKeypath :: econfd:ikeypath(),
- Choice :: econfd:qtag() | [econfd:qtag()],
- Result :: {ok, econfd:qtag()} | err().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:qtag()](econfd.md#qtag-0)
-
-Returns the current case for a choice.
-
-
-### get_compaction_info/2
-
-```erlang
--spec get_compaction_info(Socket, Dbfile) -> Result
- when
- Socket :: econfd:socket(),
- Dbfile :: compaction_dbfile(),
- Result ::
- {ok, Info} |
- {error, econfd:error_reason()}.
-```
-
-Related types: [compaction\_dbfile()](#compaction_dbfile-0), [econfd:error\_reason()](econfd.md#error_reason-0), [econfd:socket()](econfd.md#socket-0)
-
-Retrieves compaction info on Dbfile.
-
-
-### get_elem/2
-
-```erlang
--spec get_elem(CDB, IKeypath) -> Result
- when
- CDB :: cdb_sess(),
- IKeypath :: econfd:ikeypath(),
- Result :: {ok, econfd:value()} | err().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:value()](econfd.md#value-0)
-
-Read an element.
-
-Note, the C interface has separate get functions for different types.
-
-
-### get_modifications_cli/2
-
-```erlang
--spec get_modifications_cli(CDB, SubPoint) -> Result
- when
- CDB :: cdb_sess(),
- SubPoint :: pos_integer(),
- Result ::
- {ok, CliString} |
- {error, econfd:error_reason()}.
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [econfd:error\_reason()](econfd.md#error_reason-0)
-
-Equivalent to [get_modifications_cli(CDB, Point, 0)](#get_modifications_cli-3).
-
-
-### get_modifications_cli/3
-
-```erlang
--spec get_modifications_cli(CDB, SubPoint, Flags) -> Result
- when
- CDB :: cdb_sess(),
- SubPoint :: pos_integer(),
- Flags :: non_neg_integer(),
- Result ::
- {ok, CliString} |
- {error, econfd:error_reason()}.
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [econfd:error\_reason()](econfd.md#error_reason-0)
-
-Return a string with the CLI commands that correspond to the changes that triggered the subscription.
-
-
-### get_object/2
-
-```erlang
--spec get_object(CDB, IKeypath) -> Result
- when
- CDB :: cdb_sess(),
- IKeypath :: econfd:ikeypath(),
- Result :: {ok, [econfd:value()]} | err().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:value()](econfd.md#value-0)
-
-Returns all the values in a container or list entry.
-
-
-### get_objects/4
-
-```erlang
--spec get_objects(CDB, IKeypath, StartIndex, NumEntries) -> Result
- when
- CDB :: cdb_sess(),
- IKeypath :: econfd:ikeypath(),
- StartIndex :: integer(),
- NumEntries :: integer(),
- Result :: {ok, [[econfd:value()]]} | err().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:value()](econfd.md#value-0)
-
-Returns all the values for NumEntries list entries.
-
-The values are fetched starting at index StartIndex. The return value has one Erlang list for each YANG list entry, i.e. it is a list of NumEntries lists.
-
-
-### get_phase/1
-
-```erlang
--spec get_phase(Socket) -> Result
- when
- Socket :: econfd:socket(),
- Result :: {ok, {Phase, Type}} | err().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Get CDB start-phase.
-
-
-### get_txid/1
-
-```erlang
--spec get_txid(Socket) -> Result
- when
- Socket :: econfd:socket(),
- Result :: {ok, PrimaryNode, Now} | {ok, Now}.
-```
-
-Related types: [econfd:socket()](econfd.md#socket-0)
-
-Get CDB transaction id.
-
-When we are a CDB client and ConfD restarts, we can use this function to retrieve the last CDB transaction id. If it is the same as earlier, we do not need to re-read the CDB data. This is also useful when we are a CDB client in a HA setup.
-
-
-### get_values/3
-
-```erlang
--spec get_values(CDB, IKeypath, Values) -> Result
- when
- CDB :: cdb_sess(),
- IKeypath :: econfd:ikeypath(),
- Values :: [econfd:tagval()],
- Result :: {ok, [econfd:tagval()]} | err().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:tagval()](econfd.md#tagval-0)
-
-Returns the values for the leafs that have the "value" 'not_found' in the Values list.
-
-This can be used to read an arbitrary set of sub-elements of a container or list entry. The return value is a list of the same length as Values, i.e. the requested leafs are in the same positions in the returned list as in the Values argument. The elements in the returned list are always "canonical" though, i.e. of the form [`econfd:tagval()`](econfd.md#tagval-0).
-
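-For example, to fetch just two leafs of a host entry in the example model above (a sketch; the exact tagval() tag format shown here is an assumption):
-
-```erlang
-%% A sketch; assumes NS is bound to the namespace atom of the model at the
-%% top of this page. Placeholders with the "value" not_found are filled in.
-{ok, [{[NS|domain], Domain}, {[NS|defgw], DefGw}]} =
-    econfd_cdb:get_values(CDB,
-                          [{<<"buzz">>}, host, [NS|hosts]],
-                          [{[NS|domain], not_found},
-                           {[NS|defgw], not_found}]).
-```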
-
-### ibool/1
-
-```erlang
-ibool(X)
-```
-
-### index/2
-
-```erlang
--spec index(CDB, IKeypath) -> Result
- when
- CDB :: cdb_sess(),
- IKeypath :: econfd:ikeypath(),
- Result :: {ok, integer()} | err().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0)
-
-Returns the position (starting at 0) of the list entry in path.
-
-
-### initiate_journal_compaction/1
-
-```erlang
--spec initiate_journal_compaction(Socket) -> Result
- when
- Socket :: econfd:socket(),
- Result :: ok.
-```
-
-Related types: [econfd:socket()](econfd.md#socket-0)
-
-Initiates a journal compaction on all CDB files.
-
-
-### initiate_journal_dbfile_compaction/2
-
-```erlang
--spec initiate_journal_dbfile_compaction(Socket, Dbfile) -> Result
- when
- Socket ::
- econfd:socket(),
- Dbfile ::
- compaction_dbfile(),
- Result ::
- ok |
- {error,
- econfd:error_reason()}.
-```
-
-Related types: [compaction\_dbfile()](#compaction_dbfile-0), [econfd:error\_reason()](econfd.md#error_reason-0), [econfd:socket()](econfd.md#socket-0)
-
-Initiates a journal compaction on Dbfile.
-
-
-### mk_elem/1
-
-```erlang
-mk_elem(List)
-```
-
-### new_session/2
-
-```erlang
--spec new_session(Socket, Db) -> Result
- when
- Socket :: econfd:socket(),
- Db :: dbtype(),
- Result :: {ok, cdb_sess()} | err().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [dbtype()](#dbtype-0), [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Initiate a new session using the socket returned by connect().
-
-
-### new_session/3
-
-```erlang
--spec new_session(Socket, Db, Flags) -> Result
- when
- Socket :: econfd:socket(),
- Db :: dbtype(),
- Flags :: non_neg_integer(),
- Result :: {ok, cdb_sess()} | err().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [dbtype()](#dbtype-0), [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Initiate a new session using the socket returned by connect(), with detailed control via the Flags argument.
-
-
-### next_index/2
-
-```erlang
--spec next_index(CDB, IKeypath) -> Result
- when
- CDB :: cdb_sess(),
- IKeypath :: econfd:ikeypath(),
- Result :: {ok, integer()} | err().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0)
-
-Returns the position (starting at 0) of the list entry after the given path (which may be non-existing; if the list has multiple keys, the trailing keys can be '*').
-
-
-### num_instances/2
-
-```erlang
--spec num_instances(CDB, IKeypath) -> Result
- when
- CDB :: cdb_sess(),
- IKeypath :: econfd:ikeypath(),
- Result :: {ok, non_neg_integer()} | err().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0)
-
-Returns the number of entries in a list.
-
-
-### parse_keystring0/1
-
-```erlang
-parse_keystring0(Str)
-```
-
-### request/2
-
-```erlang
-request(CDB, Op)
-```
-
-### request/3
-
-```erlang
-request(CDB, Op, Arg)
-```
-
-### set_case/4
-
-```erlang
--spec set_case(CDB, IKeypath, Choice, Case) -> ok | err()
- when
- CDB :: cdb_sess(),
- IKeypath :: econfd:ikeypath(),
- Choice :: econfd:qtag() | [econfd:qtag()],
- Case :: econfd:qtag().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:qtag()](econfd.md#qtag-0)
-
-Only for CDB operational data: Set the case for a choice.
-
-
-### set_elem/3
-
-```erlang
--spec set_elem(CDB, Value, IKeypath) -> ok | err()
- when
- CDB :: cdb_sess(),
- Value :: econfd:value(),
- IKeypath :: econfd:ikeypath().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:value()](econfd.md#value-0)
-
-Only for CDB operational data: Write Value into CDB.
-
-
-### set_elem2/3
-
-```erlang
--spec set_elem2(CDB, ValueBin, IKeypath) -> ok | err()
- when
- CDB :: cdb_sess(),
- ValueBin :: binary(),
- IKeypath :: econfd:ikeypath().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0)
-
-Only for CDB operational data: Write ValueBin into CDB. ValueBin is the textual value representation.
-
-
-### set_object/3
-
-```erlang
--spec set_object(CDB, ValueList, IKeypath) -> ok | err()
- when
- CDB :: cdb_sess(),
- ValueList :: [econfd:value()],
- IKeypath :: econfd:ikeypath().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:value()](econfd.md#value-0)
-
-Only for CDB operational data: Write an entire object, i.e. YANG list entry or container.
-
-
-### set_values/3
-
-```erlang
--spec set_values(CDB, ValueList, IKeypath) -> ok | err()
- when
- CDB :: cdb_sess(),
- ValueList :: [econfd:tagval()],
- IKeypath :: econfd:ikeypath().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:tagval()](econfd.md#tagval-0)
-
-Only for CDB operational data: Write a list of tagged values.
-
-This function is an alternative to set_object/3, and allows for writing more complex structures (e.g. multiple entries in a list).
-
-
-### subscribe/3
-
-```erlang
--spec subscribe(CDB, Priority, MatchKeyString) -> Result
- when
- CDB :: cdb_sess(),
- Priority :: integer(),
- MatchKeyString :: string(),
- Result :: {ok, SubPoint} | err().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0)
-
-Equivalent to [subscribe(CDB, Priority, '', MatchKeyString)](#subscribe-4).
-
-
-### subscribe/4
-
-```erlang
--spec subscribe(CDB, Priority, Ns, MatchKeyString) -> Result
- when
- CDB :: cdb_sess(),
- Priority :: integer(),
- Ns :: sub_ns(),
- MatchKeyString :: string(),
- Result :: {ok, SubPoint} | err().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0), [sub\_ns()](#sub_ns-0)
-
-Set up a CDB configuration subscription.
-
-A CDB subscription means that we are notified when CDB changes. We can have multiple subscription points. Each subscription point is defined through a path corresponding to the paths we use for read operations, however they are in string form and allow formats that aren't possible in a proper ikeypath(). It is possible to indicate namespaces in the path with a prefix notation (see last example) - this is only necessary if there are multiple elements with the same name (in different namespaces) at some level in the path, though.
-
-We can subscribe either to specific leaf elements or entire subtrees. Subscribing to list entries can be done using fully qualified paths, or tagpaths to match multiple entries. A path which isn't a leaf element automatically matches the subtree below that path. When specifying keys to a list entry it is possible to use the wildcard character * which will match any key value.
-
-Some examples:
-
-* /hosts
-
-  Means that we subscribe to any changes in the subtree rooted at "/hosts". This includes additions or removals of "host" entries as well as changes to already existing "host" entries.
-* /hosts/host\{www\}/interfaces/interface\{eth0\}/ip
-
- Means we are notified when host "www" changes its IP address on "eth0".
-* /hosts/host/interfaces/interface/ip
-
- Means we are notified when any host changes any of its IP addresses.
-* /hosts/host/interfaces
-
- Means we are notified when either an interface is added/removed or when an individual leaf element in an existing interface is changed.
-* /hosts/host/types:data
-
- Means we are notified when any host changes the contents of its "data" element, where "data" is an element from a namespace with the prefix "types". The prefix is normally not necessary, see above.
-
-The priority value is an integer. When CDB is changed, the change is performed inside a transaction. Either a commit operation from the CLI or a candidate-commit operation in NETCONF means that the running database is changed. These changes occur inside a ConfD transaction. CDB will handle the subscriptions in lock-step priority order. First all subscribers at the lowest priority are handled, once they all have synchronized via the return value from the fun passed to wait/3, the next set - at the next priority level - is handled by CDB.
-
-Operational and configuration subscriptions can be done on the same socket, but in that case the notifications may be arbitrarily interleaved, including operational notifications arriving between different configuration notifications for the same transaction. If this is a problem, use separate sessions for operational and configuration subscriptions.
-
-The namespace argument specifies the toplevel namespace, i.e. the namespace for the first element in the path. The namespace is optional; '' can be used as a "don't care" value.
-
-subscribe() returns a subscription point which is an integer. This integer value is used later in wait/3 to identify this particular subscription.
-
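-Putting the pieces together, a configuration subscriber typically connects, subscribes, calls subscribe_done/1 and then enters a wait/3 loop, as in this sketch:
-
-```erlang
-%% Subscribe to the /hosts subtree and re-read config on every commit.
-subscriber() ->
-    {ok, Sock} = econfd_cdb:connect({127, 0, 0, 1}),
-    {ok, Sub} = econfd_cdb:subscribe_session(Sock),
-    {ok, _Point} = econfd_cdb:subscribe(Sub, 1, "/hosts"),
-    ok = econfd_cdb:subscribe_done(Sub),
-    wait_loop(Sub).
-
-wait_loop(Sub) ->
-    F = fun(_Points) ->
-                %% re-read the configuration here
-                ?CDB_DONE_PRIORITY
-        end,
-    case econfd_cdb:wait(Sub, 20000, F) of
-        {error, timeout} -> wait_loop(Sub);   %% still active, wait again
-        Other -> Other                        %% socket closed or error
-    end.
-```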
-
-### subscribe/5
-
-```erlang
--spec subscribe(CDB, Type, Priority, Ns, MatchKeyString) -> Result
- when
- CDB :: cdb_sess(),
- Type :: sub_type(),
- Priority :: integer(),
- Ns :: sub_ns(),
- MatchKeyString :: string(),
- Result :: {ok, SubPoint} | err().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0), [sub\_ns()](#sub_ns-0), [sub\_type()](#sub_type-0)
-
-Equivalent to [subscribe(CDB, Type, 0, Priority, Ns, MatchKeyString)](#subscribe-6).
-
-
-### subscribe/6
-
-```erlang
--spec subscribe(CDB, Type, Flags, Priority, Ns, MatchKeyString) ->
- Result
- when
- CDB :: cdb_sess(),
- Type :: sub_type(),
- Flags :: non_neg_integer(),
- Priority :: integer(),
- Ns :: sub_ns(),
- MatchKeyString :: string(),
- Result :: {ok, SubPoint} | err().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0), [sub\_ns()](#sub_ns-0), [sub\_type()](#sub_type-0)
-
-Generalized subscription.
-
-Where Type is one of
-
-* ?CDB_SUB_RUNNING - traditional commit subscription, same as subscribe/4.
-* ?CDB_SUB_RUNNING_TWOPHASE - two phase subscription, i.e. notification will be received for prepare, commit, and possibly abort.
-* ?CDB_SUB_OPERATIONAL - subscription for changes to CDB operational data.
-
-Flags is either 0 or:
-
-* ?CDB_SUB_WANT_ABORT_ON_ABORT - normally, if a subscriber is the one to abort a transaction, it will not receive an abort notification. This flag means that this subscriber wants an abort notification even if it originated the abort.
-
-
-### subscribe_done/1
-
-```erlang
--spec subscribe_done(CDB) -> ok | err() when CDB :: cdb_sess().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [err()](#err-0)
-
-After a subscriber has issued all its subscriptions and is ready to receive updates, subscribe_done/1 must be called. Until it has been called, no notifications will be delivered.
-
-
-### subscribe_session/1
-
-```erlang
--spec subscribe_session(Socket) -> {ok, cdb_sess()}
- when Socket :: econfd:socket().
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [econfd:socket()](econfd.md#socket-0)
-
-Initialize a subscription socket.
-
-This is a socket that is used to receive notifications about updates to the database. A subscription socket is used in the subscribe() function.
-
-
-### sync_subscription_socket/4
-
-```erlang
-sync_subscription_socket(CDB, SyncType, TimeOut, Fun)
-```
-
-### trigger_oper_subscriptions/1
-
-```erlang
--spec trigger_oper_subscriptions(Socket) -> ok | err()
- when Socket :: econfd:socket().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Equivalent to [trigger_oper_subscriptions(Socket, all)](#trigger_oper_subscriptions-2).
-
-
-### trigger_oper_subscriptions/2
-
-```erlang
--spec trigger_oper_subscriptions(Socket, SubPoints) -> ok | err()
- when
- Socket :: econfd:socket(),
- SubPoints ::
- [pos_integer()] | all.
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Equivalent to [trigger_oper_subscriptions(Socket, SubPoints, 0)](#trigger_oper_subscriptions-3).
-
-
-### trigger_oper_subscriptions/3
-
-```erlang
--spec trigger_oper_subscriptions(Socket, SubPoints, Flags) -> ok | err()
- when
- Socket :: econfd:socket(),
- SubPoints ::
- [pos_integer()] | all,
- Flags :: non_neg_integer().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Trigger CDB operational subscribers as if an update in oper data had been done.
-
-Flags can be given as ?CDB_LOCK_WAIT to have the call wait until the subscription lock becomes available, otherwise it should be 0.
-
-
-### trigger_subscriptions/1
-
-```erlang
--spec trigger_subscriptions(Socket) -> ok | err()
- when Socket :: econfd:socket().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Equivalent to [trigger_subscriptions(Socket, all)](#trigger_subscriptions-2).
-
-
-### trigger_subscriptions/2
-
-```erlang
--spec trigger_subscriptions(Socket, SubPoints) -> ok | err()
- when
- Socket :: econfd:socket(),
- SubPoints :: [pos_integer()] | all.
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Trigger CDB subscribers as if an update in the configuration had been done.
-
-
-### wait/3
-
-```erlang
--spec wait(CDB, TimeOut, Fun) -> Result
- when
- CDB :: cdb_sess(),
- TimeOut :: integer() | infinity,
- Fun ::
- fun((SubPoints) ->
- close | subscription_sync_type()) |
- fun((Type, Flags, SubPoints) ->
- close |
- subscription_sync_type() |
- {error, econfd:error_reason()}),
- Result ::
- ok |
- {error, badretval} |
- {error, econfd:transport_error()} |
- {error, econfd:error_reason()}.
-```
-
-Related types: [cdb\_sess()](#cdb_sess-0), [subscription\_sync\_type()](#subscription_sync_type-0), [econfd:error\_reason()](econfd.md#error_reason-0), [econfd:transport\_error()](econfd.md#transport_error-0)
-
-Wait for subscription events.
-
-The fun will be given a list of the subscription points that triggered, and in the arity-3 case also Type and Flags for the notification. There can be several points if we have issued several subscriptions at the same priority.
-
-Type is one of:
-
-* ?CDB_SUB_PREPARE - notification for the prepare phase
-* ?CDB_SUB_COMMIT - notification for the commit phase
-* ?CDB_SUB_ABORT - notification for abort when prepare failed
-* ?CDB_SUB_OPER - notification for changes to CDB operational data
-
-Flags is the 'bor' of zero or more of:
-
-* ?CDB_SUB_FLAG_IS_LAST - the last notification of its type for this session
-* ?CDB_SUB_FLAG_TRIGGER - the notification was artificially triggered
-* ?CDB_SUB_FLAG_REVERT - the notification is due to revert of a confirmed commit
-* ?CDB_SUB_FLAG_HA_SYNC - the cause of the subscription notification is initial synchronization of a HA secondary from CDB on the primary.
-* ?CDB_SUB_FLAG_HA_IS_SECONDARY - the system is currently in HA SECONDARY mode.
-
-The fun can return the atom 'close' if we wish to close the socket and return from wait/3. Otherwise there are four different types of synchronization replies the application can use as return values from either the arity-1 or the arity-3 fun:
-
-* ?CDB_DONE_PRIORITY This means that the application has acted on the subscription notification and CDB can continue to deliver further notifications.
-* ?CDB_DONE_SOCKET This means that we are done. But regardless of priority, CDB shall not send any further notifications to us on our socket that are related to the currently executing transaction.
-* ?CDB_DONE_TRANSACTION This means that CDB should not send any further notifications to any subscribers - including ourselves - related to the currently executing transaction.
-* ?CDB_DONE_OPERATIONAL This should be used when a subscription notification for operational data has been read. It is the only type that should be used in this case, since the operational data does not have transactions and the notifications do not have priorities.
-
-Finally the arity-3 fun can, when Type == ?CDB_SUB_PREPARE, return an error either as \{error, binary()\} or as \{error, #confd_error\{\}\} (\{error, tuple()\} is only for internal ConfD/NCS use). This will cause the commit of the current transaction to be aborted.
-
-CDB is locked for writing while config subscriptions are delivered.
-
-When wait/3 returns \{error, timeout\} the connection (and its subscriptions) is still active and the application needs to call wait/3 again. But if wait/3 returns ok or \{error, Reason\} the connection to ConfD is closed and all subscription points associated with it are cleared.
-
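-An arity-3 fun for a two phase (?CDB_SUB_RUNNING_TWOPHASE) subscription could be structured like this sketch, where changes_acceptable/0 is a hypothetical application check:
-
-```erlang
-%% Vet changes in prepare; act on them in commit; undo on abort.
-two_phase_fun(?CDB_SUB_PREPARE, _Flags, _Points) ->
-    case changes_acceptable() of
-        true -> ?CDB_DONE_PRIORITY;
-        false -> {error, <<"rejected by subscriber">>}
-    end;
-two_phase_fun(?CDB_SUB_COMMIT, _Flags, _Points) ->
-    %% apply the new configuration
-    ?CDB_DONE_PRIORITY;
-two_phase_fun(?CDB_SUB_ABORT, _Flags, _Points) ->
-    %% roll back anything done in the prepare phase
-    ?CDB_DONE_PRIORITY.
-```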
-
-### wait_start/1
-
-```erlang
--spec wait_start(Socket) -> ok | err() when Socket :: econfd:socket().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Wait for CDB to become available (reach start-phase one).
-
-
-### xx/2
-
-```erlang
-xx(Str, Acc)
-```
-
-### xx/3
-
-```erlang
-xx(T, Sofar, Acc)
-```
-
-### yy/1
-
-```erlang
-yy(Str)
-```
-
-### yy/2
-
-```erlang
-yy(T, Sofar)
-```
diff --git a/developer-reference/erlang/econfd_ha.md b/developer-reference/erlang/econfd_ha.md
deleted file mode 100644
index d5064b07..00000000
--- a/developer-reference/erlang/econfd_ha.md
+++ /dev/null
@@ -1,200 +0,0 @@
-# Module econfd_ha
-
-An Erlang interface equivalent to the HA C-API (documented in confd_lib_ha(3)).
-
-
-## Types
-
-### ha_node/0
-
-```erlang
--type ha_node() :: #ha_node{}.
-```
-
-## Functions
-
-### bemaster/2
-
-```erlang
--spec bemaster(Socket, NodeId) -> Result
- when
- Socket :: econfd:socket(),
- NodeId :: econfd:value(),
- Result :: ok | {error, econfd:error_reason()}.
-```
-
-Related types: [econfd:error\_reason()](econfd.md#error_reason-0), [econfd:socket()](econfd.md#socket-0), [econfd:value()](econfd.md#value-0)
-
-Instruct a HA node to be primary in the cluster.
-
-
-### benone/1
-
-```erlang
--spec benone(Socket) -> Result
- when
- Socket :: econfd:socket(),
- Result :: ok | {error, econfd:error_reason()}.
-```
-
-Related types: [econfd:error\_reason()](econfd.md#error_reason-0), [econfd:socket()](econfd.md#socket-0)
-
-Instruct a HA node to have no HA role ("none") in the cluster.
-
-
-### beprimary/2
-
-```erlang
--spec beprimary(Socket, NodeId) -> Result
- when
- Socket :: econfd:socket(),
- NodeId :: econfd:value(),
- Result :: ok | {error, econfd:error_reason()}.
-```
-
-Related types: [econfd:error\_reason()](econfd.md#error_reason-0), [econfd:socket()](econfd.md#socket-0), [econfd:value()](econfd.md#value-0)
-
-Instruct a HA node to be primary in the cluster.
-
-
-### berelay/1
-
-```erlang
--spec berelay(Socket) -> Result
- when
- Socket :: econfd:socket(),
- Result :: ok | {error, econfd:error_reason()}.
-```
-
-Related types: [econfd:error\_reason()](econfd.md#error_reason-0), [econfd:socket()](econfd.md#socket-0)
-
-Instruct a HA secondary to be a relay for other secondaries.
-
-
-### besecondary/4
-
-```erlang
--spec besecondary(Socket, NodeId, PrimaryNodeId, WaitReplyBool) ->
- Result
- when
- Socket :: econfd:socket(),
- NodeId :: econfd:value(),
- PrimaryNodeId :: ha_node(),
- WaitReplyBool :: integer(),
- Result :: ok | {error, econfd:error_reason()}.
-```
-
-Related types: [ha\_node()](#ha_node-0), [econfd:error\_reason()](econfd.md#error_reason-0), [econfd:socket()](econfd.md#socket-0), [econfd:value()](econfd.md#value-0)
-
-Instruct a HA node to be secondary in the cluster where PrimaryNodeId is primary.
-
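-A sketch of making a node secondary (the #ha_node\{\} field names are assumptions based on econfd.hrl, and WaitReplyBool = 1 requests a synchronous reply):
-
-```erlang
-%% Sketch: make this node ("n2") a secondary of node "n1".
-become_secondary(Sock) ->
-    Primary = #ha_node{nodeid = <<"n1">>, addr = {192, 168, 1, 10}},
-    ok = econfd_ha:besecondary(Sock, <<"n2">>, Primary, 1).
-```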
-
-### beslave/4
-
-```erlang
--spec beslave(Socket, NodeId, PrimaryNodeId, WaitReplyBool) -> Result
- when
- Socket :: econfd:socket(),
- NodeId :: econfd:value(),
- PrimaryNodeId :: ha_node(),
- WaitReplyBool :: integer(),
- Result :: ok | {error, econfd:error_reason()}.
-```
-
-Related types: [ha\_node()](#ha_node-0), [econfd:error\_reason()](econfd.md#error_reason-0), [econfd:socket()](econfd.md#socket-0), [econfd:value()](econfd.md#value-0)
-
-Instruct a HA node to be secondary in the cluster where PrimaryNodeId is primary.
-
-
-### close/1
-
-```erlang
--spec close(Socket) -> Result
- when
- Socket :: econfd:socket(),
- Result :: ok | {error, econfd:error_reason()}.
-```
-
-Related types: [econfd:error\_reason()](econfd.md#error_reason-0), [econfd:socket()](econfd.md#socket-0)
-
-Close the HA connection.
-
-
-### connect/2
-
-```erlang
--spec connect(Path, Token) -> econfd:connect_result()
- when Path :: string(), Token :: binary();
- (Address, Token) -> econfd:connect_result()
- when Address :: econfd:ip(), Token :: binary().
-```
-
-Related types: [econfd:connect\_result()](econfd.md#connect_result-0), [econfd:ip()](econfd.md#ip-0)
-
-### connect/3
-
-```erlang
-connect(Address, Port, Token)
-```
-
-### do_connect/2
-
-```erlang
--spec do_connect(Address, Token) -> econfd:connect_result()
- when
- Address ::
- #econfd_conn_ip{} | #econfd_conn_local{},
- Token :: binary().
-```
-
-Related types: [econfd:connect\_result()](econfd.md#connect_result-0)
-
-Connect to the HA subsystem.
-
-If the port is changed it must also be changed in confd.conf. To close a HA socket, use `close/1`.
-
-
-### getstatus/1
-
-```erlang
--spec getstatus(Socket) -> Result
- when
- Socket :: econfd:socket(),
- Result :: ok | {error, econfd:error_reason()}.
-```
-
-Related types: [econfd:error\_reason()](econfd.md#error_reason-0), [econfd:socket()](econfd.md#socket-0)
-
-Request status from a HA node.
-
-
-### secondary_dead/2
-
-```erlang
--spec secondary_dead(Socket, NodeId) -> Result
- when
- Socket :: econfd:socket(),
- NodeId :: econfd:value(),
- Result ::
- ok | {error, econfd:error_reason()}.
-```
-
-Related types: [econfd:error\_reason()](econfd.md#error_reason-0), [econfd:socket()](econfd.md#socket-0), [econfd:value()](econfd.md#value-0)
-
-Instruct ConfD that another node is dead.
-
-
-### slave_dead/2
-
-```erlang
--spec slave_dead(Socket, NodeId) -> Result
- when
- Socket :: econfd:socket(),
- NodeId :: econfd:value(),
- Result :: ok | {error, econfd:error_reason()}.
-```
-
-Related types: [econfd:error\_reason()](econfd.md#error_reason-0), [econfd:socket()](econfd.md#socket-0), [econfd:value()](econfd.md#value-0)
-
-Instruct ConfD that another node is dead.
-
diff --git a/developer-reference/erlang/econfd_logsyms.md b/developer-reference/erlang/econfd_logsyms.md
deleted file mode 100644
index a6886323..00000000
--- a/developer-reference/erlang/econfd_logsyms.md
+++ /dev/null
@@ -1,57 +0,0 @@
-# Module econfd_logsyms
-
-## Types
-
-### logsym/0
-
-```erlang
--type logsym() :: {LogSymStr :: string(), Descr :: string()}.
-```
-
-### logsyms/0
-
-```erlang
--type logsyms() :: tuple().
-```
-
-## Functions
-
-### array/0
-
-```erlang
--spec array() -> logsyms().
-```
-
-Related types: [logsyms()](#logsyms-0)
-
-### array/2
-
-```erlang
-array(Max, _)
-```
-
-### get_descr/1
-
-```erlang
--spec get_descr(LogSym :: integer()) -> Descr :: string().
-```
-
-### get_logsym/1
-
-```erlang
--spec get_logsym(LogSym :: integer()) -> logsym().
-```
-
-Related types: [logsym()](#logsym-0)
-
-### get_logsymstr/1
-
-```erlang
--spec get_logsymstr(LogSym :: integer()) -> LogSymStr :: string().
-```
-
-### max_sym/0
-
-```erlang
-max_sym()
-```
diff --git a/developer-reference/erlang/econfd_maapi.md b/developer-reference/erlang/econfd_maapi.md
deleted file mode 100644
index 63be4ea0..00000000
--- a/developer-reference/erlang/econfd_maapi.md
+++ /dev/null
@@ -1,2565 +0,0 @@
-# Module econfd_maapi
-
-An Erlang interface equivalent to the MAAPI C-API
-
-This module implements the Management Agent API. All functions in this module have an equivalent function in the C library. The actual semantics of each of the API functions described here are better described in the man page confd_lib_maapi(3).
-
-
-## Types
-
-### confd_user_identification/0
-
-```erlang
--type confd_user_identification() :: #confd_user_identification{}.
-```
-
-### confd_user_info/0
-
-```erlang
--type confd_user_info() :: #confd_user_info{}.
-```
-
-### dbname/0
-
-```erlang
--type dbname() :: 0 | 1 | 2 | 3 | 4 | 6 | 7.
-```
-
-The DB name can be either
-
-* 0 = CONFD_NO_DB
-* 1 = CONFD_CANDIDATE
-* 2 = CONFD_RUNNING
-* 3 = CONFD_STARTUP
-* 4 = CONFD_OPERATIONAL
-* 6 = CONFD_PRE_COMMIT_RUNNING
-* 7 = CONFD_INTENDED
-
-Check `maapi_start_trans()` in confd_lib_maapi(3) for detailed information.
-
-
-### err/0
-
-```erlang
--type err() :: {error, {integer(), binary()}} | {error, closed}.
-```
-
-Errors can be either
-
-* \{error, \{Ecode::integer(), Reason::binary()\}\} where Ecode is one of the error codes defined in econfd_errors.hrl, and Reason is a (possibly empty) textual description
-* \{error, closed\} if the socket gets closed
-
-
-### find_next_type/0
-
-```erlang
--type find_next_type() :: 0 | 1.
-```
-
-The type used in `find_next/3` can be either
-
-* 0 = CONFD_FIND_NEXT
-* 1 = CONFD_FIND_SAME_OR_NEXT
-
-Check `maapi_find_next()` in confd_lib_maapi(3) for detailed information.
-
-
-### maapi_cursor/0
-
-```erlang
--type maapi_cursor() :: #maapi_cursor{}.
-```
-
-### proto/0
-
-```erlang
--type proto() :: 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9.
-```
-
-The protocol to start user session can be either
-
-* 0 = CONFD_PROTO_UNKNOWN
-* 1 = CONFD_PROTO_TCP
-* 2 = CONFD_PROTO_SSH
-* 3 = CONFD_PROTO_SYSTEM
-* 4 = CONFD_PROTO_CONSOLE
-* 5 = CONFD_PROTO_SSL
-* 6 = CONFD_PROTO_HTTP
-* 7 = CONFD_PROTO_HTTPS
-* 8 = CONFD_PROTO_UDP
-* 9 = CONFD_PROTO_TLS
-
-
-### read_ret/0
-
-```erlang
--type read_ret() ::
- ok |
- {ok, term()} |
- {error, {ErrorCode :: non_neg_integer(), Info :: binary()}} |
- {error, econfd:transport_error()}.
-```
-
-Related types: [econfd:transport\_error()](econfd.md#transport_error-0)
-
-### template_type/0
-
-```erlang
--type template_type() :: 0 | 1 | 2.
-```
-
-The type is used in `ncs_template_variables/3` and can be either
-
-* 0 = DEVICE_TEMPLATE - Designates a device template, i.e. a named template configuration under /ncs:devices/ncs:template.
-* 1 = SERVICE_TEMPLATE - Designates a service template, i.e. a named template loaded from the templates directory of a package.
-* 2 = COMPLIANCE_TEMPLATE - Designates a compliance template, used to verify that the configuration on a device conforms to an expected, predefined configuration, i.e. a named template configuration under /ncs:compliance/ncs:template.
-
-
-### trans_mode/0
-
-```erlang
--type trans_mode() :: read | read_write.
-```
-
-### verbosity/0
-
-```erlang
--type verbosity() :: 0 | 1 | 2 | 3.
-```
-
-The type is used in `start_span_th/7` and can be either
-
-* 0 = CONFD_PROGRESS_NORMAL
-* 1 = CONFD_PROGRESS_VERBOSE
-* 2 = CONFD_PROGRESS_VERY_VERBOSE
-* 3 = CONFD_PROGRESS_DEBUG
-
-Check `maapi_start_span_th()` in confd_lib_maapi(3) for detailed information.
-
-
-### xpath_eval_option/0
-
-```erlang
--type xpath_eval_option() ::
- {tracefun, term()} |
- {context, econfd:ikeypath()} |
- {varbindings,
- [{Name :: string(), ValueExpr :: string() | binary()}]} |
- {root, econfd:ikeypath()}.
-```
-
-Related types: [econfd:ikeypath()](econfd.md#ikeypath-0)
-
-## Functions
-
-### aaa_reload/2
-
-```erlang
--spec aaa_reload(Socket, Synchronous) -> ok | err()
- when
- Socket :: econfd:socket(),
- Synchronous :: boolean().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Tell AAA to reload external AAA data.
-
-
-### abort_trans/2
-
-```erlang
--spec abort_trans(Socket, Tid) -> ok | err()
- when Socket :: econfd:socket(), Tid :: integer().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Abort transaction.
-
-
-### abort_upgrade/1
-
-```erlang
--spec abort_upgrade(Socket) -> ok | err() when Socket :: econfd:socket().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Abort in-service upgrade.
-
-
-### aes256_key/1
-
-```erlang
-aes256_key(Aes256Key)
-```
-
-### aes_key/2
-
-```erlang
-aes_key(AesKey, AesIVec)
-```
-
-### all_keys/2
-
-```erlang
-all_keys(Cursor, Acc)
-```
-
-### all_keys/3
-
-```erlang
--spec all_keys(Socket, Tid, IKeypath) -> Result
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath(),
- Result :: {ok, [econfd:key()]} | err().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:key()](econfd.md#key-0), [econfd:socket()](econfd.md#socket-0)
-
-Utility function. Return all keys in a list.
-
-
-### apply_trans/3
-
-```erlang
--spec apply_trans(Socket, Tid, KeepOpen) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- KeepOpen :: boolean().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Equivalent to [apply_trans(Socket, Tid, KeepOpen, 0)](#apply_trans-4).
-
-
-### apply_trans/4
-
-```erlang
--spec apply_trans(Socket, Tid, KeepOpen, Flags) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- KeepOpen :: boolean(),
- Flags :: non_neg_integer().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Apply all in the transaction.
-
-This is the combination of validate/prepare/commit done in the right order.
-
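-A complete write transaction might look like the following sketch. The exact argument lists in this sketch are assumptions (start_user_session and start_trans are described elsewhere in this module), as are the ?CONFD_PORT, ?CONFD_PROTO_SYSTEM and ?CONFD_RUNNING macros from econfd.hrl.
-
-```erlang
-%% Sketch: open a user session, write one leaf, apply, and clean up.
-write_leaf(IKeypath, Value) ->
-    {ok, Sock} = econfd_maapi:connect({127, 0, 0, 1}, ?CONFD_PORT),
-    ok = econfd_maapi:start_user_session(Sock, <<"admin">>, <<"system">>,
-                                         [<<"admin">>], {127, 0, 0, 1},
-                                         ?CONFD_PROTO_SYSTEM),
-    {ok, Tid} = econfd_maapi:start_trans(Sock, ?CONFD_RUNNING, read_write),
-    ok = econfd_maapi:set_elem(Sock, Tid, IKeypath, Value),
-    ok = econfd_maapi:apply_trans(Sock, Tid, false),
-    ok = econfd_maapi:end_user_session(Sock),
-    econfd_maapi:close(Sock).
-```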
-
-### attach/3
-
-```erlang
--spec attach(Socket, Ns, Tctx) -> ok | err()
- when
- Socket :: econfd:socket(),
- Ns :: econfd:namespace() | 0,
- Tctx :: econfd:confd_trans_ctx().
-```
-
-Related types: [err()](#err-0), [econfd:confd\_trans\_ctx()](econfd.md#confd_trans_ctx-0), [econfd:namespace()](econfd.md#namespace-0), [econfd:socket()](econfd.md#socket-0)
-
-Attach to a running transaction.
-
-Give NameSpace as 0 if it doesn't matter (-1 works too but is deprecated).
-
-
-### attach2/4
-
-```erlang
--spec attach2(Socket, Ns, USid, Thandle) -> ok | err()
- when
- Socket :: econfd:socket(),
- Ns :: econfd:namespace() | 0,
- USid :: integer(),
- Thandle :: integer().
-```
-
-Related types: [err()](#err-0), [econfd:namespace()](econfd.md#namespace-0), [econfd:socket()](econfd.md#socket-0)
-
-Attach to a running transaction. Give NameSpace as 0 if it doesn't matter (-1 works too but is deprecated).
-
-
-### attach_init/1
-
-```erlang
--spec attach_init(Socket) -> Result
- when
- Socket :: econfd:socket(),
- Result :: {ok, Thandle} | err().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Attach to the CDB init/upgrade transaction in phase0.
-
-On success, returns the transaction handle to use in subsequent MAAPI calls.
-
-
-### authenticate/4
-
-```erlang
--spec authenticate(Socket, User, Pass, Groups) -> ok | err()
- when
- Socket :: econfd:socket(),
- User :: binary(),
- Pass :: binary(),
- Groups :: [binary()].
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Authenticate a user using ConfD AAA.
-
-
-### authenticate2/8
-
-```erlang
--spec authenticate2(Socket, User, Pass, SrcIp, SrcPort, Context, Proto,
- Groups) ->
- ok | err()
- when
- Socket :: econfd:socket(),
- User :: binary(),
- Pass :: binary(),
- SrcIp :: econfd:ip(),
- SrcPort :: non_neg_integer(),
- Context :: binary(),
- Proto :: integer(),
- Groups :: [binary()].
-```
-
-Related types: [err()](#err-0), [econfd:ip()](econfd.md#ip-0), [econfd:socket()](econfd.md#socket-0)
-
-Authenticate a user using ConfD AAA.
-
-
-### bool2int/1
-
-```erlang
-bool2int(_)
-```
-
-### candidate_abort_commit/1
-
-```erlang
--spec candidate_abort_commit(Socket) -> ok | err()
- when Socket :: econfd:socket().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Equivalent to [candidate_abort_commit(Socket, <<>>)](#candidate_abort_commit-2).
-
-
-### candidate_abort_commit/2
-
-```erlang
--spec candidate_abort_commit(Socket, PersistId) -> ok | err()
- when
- Socket :: econfd:socket(),
- PersistId :: binary().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Cancel a persistent confirmed commit.
-
-
-### candidate_commit/1
-
-```erlang
--spec candidate_commit(Socket) -> ok | err()
- when Socket :: econfd:socket().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Equivalent to [candidate_commit_info(Socket, undefined, <<>>, <<>>)](#candidate_commit_info-4).
-
-Copies candidate to running or confirms a confirmed commit.
-
-
-### candidate_commit/2
-
-```erlang
--spec candidate_commit(Socket, PersistId) -> ok | err()
- when
- Socket :: econfd:socket(),
- PersistId :: binary().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Equivalent to [candidate_commit_info(Socket, PersistId, <<>>, <<>>)](#candidate_commit_info-4).
-
-Confirms a persistent confirmed commit.
-
-
-### candidate_commit_info/3
-
-```erlang
--spec candidate_commit_info(Socket, Label, Comment) -> ok | err()
- when
- Socket :: econfd:socket(),
- Label :: binary(),
- Comment :: binary().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Equivalent to [candidate_commit_info(Socket, undefined, Label, Comment)](#candidate_commit_info-4).
-
-Like `candidate_commit/1`, but set the "Label" and/or "Comment" that is stored in the rollback file when the candidate is committed to running.
-
-To set only the "Label", give Comment as an empty binary, and to set only the "Comment", give Label as an empty binary.
-
-Note: To ensure that the "Label" and/or "Comment" are stored in the rollback file in all cases when doing a confirmed commit, they must be given both with the confirmed commit (using `candidate_confirmed_commit_info/4`) and with the confirming commit (using this function).
-
-
-### candidate_commit_info/4
-
-```erlang
--spec candidate_commit_info(Socket, PersistId, Label, Comment) ->
- ok | err()
- when
- Socket :: econfd:socket(),
- PersistId :: binary() | undefined,
- Label :: binary(),
- Comment :: binary().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Combines `candidate_commit/2` and `candidate_commit_info/3` \- set "Label" and/or "Comment" when confirming a persistent confirmed commit.
-
-Note: To ensure that the "Label" and/or "Comment" are stored in the rollback file in all cases when doing a confirmed commit, they must be given both with the confirmed commit (using `candidate_confirmed_commit_info/6`) and with the confirming commit (using this function).
-
-
-### candidate_confirmed_commit/2
-
-```erlang
--spec candidate_confirmed_commit(Socket, TimeoutSecs) -> ok | err()
- when
- Socket :: econfd:socket(),
- TimeoutSecs :: integer().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Equivalent to [candidate_confirmed_commit_info(Socket, TimeoutSecs, undefined, undefined, <<>>, <<>>)](#candidate_confirmed_commit_info-6).
-
-Copy candidate into running, but roll back unless confirmed by a call to `candidate_commit/1`.
-
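-For example, a ten minute confirmed commit that is later confirmed:
-
-```erlang
-%% Copy candidate to running, but revert unless confirmed within 600 s.
-ok = econfd_maapi:candidate_confirmed_commit(Sock, 600),
-%% ... verify that the new configuration actually works ...
-ok = econfd_maapi:candidate_commit(Sock).  % the confirming commit
-```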
-
-### candidate_confirmed_commit/4
-
-```erlang
--spec candidate_confirmed_commit(Socket, TimeoutSecs, Persist,
- PersistId) ->
- ok | err()
- when
- Socket :: econfd:socket(),
- TimeoutSecs :: integer(),
- Persist :: binary() | undefined,
- PersistId ::
- binary() | undefined.
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Equivalent to [candidate_confirmed_commit_info(Socket, TimeoutSecs, Persist, PersistId, <<>>, <<>>)](#candidate_confirmed_commit_info-6).
-
-Starts or extends a persistent confirmed commit.
-
-
-### candidate_confirmed_commit_info/4
-
-```erlang
--spec candidate_confirmed_commit_info(Socket, TimeoutSecs, Label,
- Comment) ->
- ok | err()
- when
- Socket :: econfd:socket(),
- TimeoutSecs :: integer(),
- Label :: binary(),
- Comment :: binary().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Equivalent to [candidate_confirmed_commit_info(Socket, TimeoutSecs, undefined, undefined, Label, Comment)](#candidate_confirmed_commit_info-6).
-
-Like `candidate_confirmed_commit/2`, but set the "Label" and/or "Comment" that is stored in the rollback file when the candidate is committed to running.
-
-To set only the "Label", give Comment as an empty binary, and to set only the "Comment", give Label as an empty binary.
-
-Note: To ensure that the "Label" and/or "Comment" are stored in the rollback file in all cases when doing a confirmed commit, they must be given both with the confirmed commit (using this function) and with the confirming commit (using `candidate_commit_info/3`).
-
-
-### candidate_confirmed_commit_info/6
-
-```erlang
--spec candidate_confirmed_commit_info(Socket, TimeoutSecs, Persist,
- PersistId, Label, Comment) ->
- ok | err()
- when
- Socket :: econfd:socket(),
- TimeoutSecs :: integer(),
- Persist ::
- binary() | undefined,
- PersistId ::
- binary() | undefined,
- Label :: binary(),
- Comment :: binary().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Combines `candidate_confirmed_commit/4` and `candidate_confirmed_commit_info/4` \- set "Label" and/or "Comment" when starting or extending a persistent confirmed commit.
-
-Note: To ensure that the "Label" and/or "Comment" are stored in the rollback file in all cases when doing a confirmed commit, they must be given both with the confirmed commit (using this function) and with the confirming commit (using `candidate_commit_info/4`).
-
-
-### candidate_reset/1
-
-```erlang
--spec candidate_reset(Socket) -> ok | err()
- when Socket :: econfd:socket().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Copy running into candidate.
-
-
-### candidate_validate/1
-
-```erlang
--spec candidate_validate(Socket) -> ok | err()
- when Socket :: econfd:socket().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Validate the candidate config.
-
-
-### cli_prompt/4
-
-```erlang
--spec cli_prompt(Socket, USid, Prompt, Echo) -> {ok, binary()} | err()
- when
- Socket :: econfd:socket(),
- USid :: integer(),
- Prompt :: binary(),
- Echo :: boolean().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Prompt CLI user for a reply.
-
-
-### cli_prompt/5
-
-```erlang
--spec cli_prompt(Socket, USid, Prompt, Echo, Timeout) ->
- {ok, binary()} | err()
- when
- Socket :: econfd:socket(),
- USid :: integer(),
- Prompt :: binary(),
- Echo :: boolean(),
- Timeout :: non_neg_integer().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Prompt CLI user for a reply - return error if no reply is received within Timeout seconds.
-
-
-### cli_prompt_oneof/4
-
-```erlang
--spec cli_prompt_oneof(Socket, USid, Prompt, Choice) ->
- {ok, binary()} | err()
- when
- Socket :: econfd:socket(),
- USid :: integer(),
- Prompt :: binary(),
- Choice :: binary().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Prompt CLI user for a reply.
-
-
-### cli_prompt_oneof/5
-
-```erlang
--spec cli_prompt_oneof(Socket, USid, Prompt, Choice, Timeout) ->
- {ok, binary()} | err()
- when
- Socket :: econfd:socket(),
- USid :: integer(),
- Prompt :: binary(),
- Choice :: binary(),
- Timeout :: non_neg_integer().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Prompt CLI user for a reply - return error if no reply is received within Timeout seconds.
-
-
-### cli_read_eof/3
-
-```erlang
--spec cli_read_eof(Socket, USid, Echo) -> {ok, binary()} | err()
- when
- Socket :: econfd:socket(),
- USid :: integer(),
- Echo :: boolean().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Read data from CLI until EOF.
-
-
-### cli_read_eof/4
-
-```erlang
--spec cli_read_eof(Socket, USid, Echo, Timeout) ->
- {ok, binary()} | err()
- when
- Socket :: econfd:socket(),
- USid :: integer(),
- Echo :: boolean(),
- Timeout :: non_neg_integer().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Read data from CLI until EOF - return error if no reply is received within Timeout seconds.
-
-
-### cli_write/3
-
-```erlang
--spec cli_write(Socket, USid, Message) -> ok | err()
- when
- Socket :: econfd:socket(),
- USid :: integer(),
- Message :: binary().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Write a message to the CLI.
-
-
-### close/1
-
-```erlang
--spec close(Socket) -> Result
- when
- Socket :: econfd:socket(),
- Result :: ok | {error, econfd:error_reason()}.
-```
-
-Related types: [econfd:error\_reason()](econfd.md#error_reason-0), [econfd:socket()](econfd.md#socket-0)
-
-Close socket.
-
-
-### commit_trans/2
-
-```erlang
--spec commit_trans(Socket, Tid) -> ok | err()
- when Socket :: econfd:socket(), Tid :: integer().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Commit a transaction.
-
-
-### commit_upgrade/1
-
-```erlang
--spec commit_upgrade(Socket) -> ok | err()
- when Socket :: econfd:socket().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Commit in-service upgrade.
-
-
-### confirmed_commit_in_progress/1
-
-```erlang
--spec confirmed_commit_in_progress(Socket) -> Result
- when
- Socket :: econfd:socket(),
- Result ::
- {ok, boolean()} | err().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Check whether a confirmed commit is in progress.
-
-
-### connect/1
-
-```erlang
--spec connect(Path) -> econfd:connect_result() when Path :: string().
-```
-
-Related types: [econfd:connect\_result()](econfd.md#connect_result-0)
-
-Connect a maapi socket to ConfD.
-
-
-### connect/2
-
-```erlang
--spec connect(Address, Port) -> econfd:connect_result()
- when Address :: econfd:ip(), Port :: non_neg_integer().
-```
-
-Related types: [econfd:connect\_result()](econfd.md#connect_result-0), [econfd:ip()](econfd.md#ip-0)
-
-Connect a maapi socket to ConfD.
-
-
-### copy/3
-
-```erlang
--spec copy(Socket, FromTH, ToTH) -> ok | err()
- when
- Socket :: econfd:socket(),
- FromTH :: integer(),
- ToTH :: integer().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Copy data from one transaction to another.
-
-
-### copy_running_to_startup/1
-
-```erlang
--spec copy_running_to_startup(Socket) -> ok | err()
- when Socket :: econfd:socket().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Copy running to startup.
-
-
-### copy_tree/4
-
-```erlang
--spec copy_tree(Socket, Tid, FromIKeypath, ToIKeypath) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- FromIKeypath :: econfd:ikeypath(),
- ToIKeypath :: econfd:ikeypath().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-Copy an entire subtree in the configuration from one point to another.
-
-
-### create/3
-
-```erlang
--spec create(Socket, Tid, IKeypath) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-Create a new element.
-
-
-### delete/3
-
-```erlang
--spec delete(Socket, Tid, IKeypath) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-Delete an element.
-
-
-### delete_config/2
-
-```erlang
--spec delete_config(Socket, DbName) -> ok | err()
- when
- Socket :: econfd:socket(), DbName :: dbname().
-```
-
-Related types: [dbname()](#dbname-0), [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Delete all data from a data store.
-
-
-### des_key/4
-
-```erlang
-des_key(DesKey1, DesKey2, DesKey3, DesIVec)
-```
-
-### detach/2
-
-```erlang
--spec detach(Socket, Thandle) -> ok | err()
- when Socket :: econfd:socket(), Thandle :: integer().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Detach from the transaction.
-
-
-### diff_iterate/4
-
-```erlang
-diff_iterate(Sock, Tid, Fun, InitState)
-```
-
-Equivalent to [diff_iterate(Sock, Tid, Fun, 0, InitState)](#diff_iterate-5).
-
-
-### diff_iterate/5
-
-```erlang
--spec diff_iterate(Socket, Tid, Fun, Flags, State) -> Result
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- Fun ::
- fun((IKeypath, Op, OldValue, Value, State) ->
- {ok, Ret, State} | {error, term()}),
- Flags :: non_neg_integer(),
- State :: term(),
- Result :: {ok, State} | {error, term()}.
-```
-
-Related types: [econfd:socket()](econfd.md#socket-0)
-
-Iterate through a diff.
-
-This function is used in combination with the notifications API, where we get a chance to iterate through the diff of a transaction just before it gets committed. The transaction hangs until we have called `econfd_notif:notification_done/2`. The function can also be called from within validate() callbacks to traverse a diff while validating. Currently OldValue is always the atom 'undefined'. When Op == ?MOP_MOVED_AFTER (only for "ordered-by user" list entries), Value == \{\} means that the entry was moved first in the list; otherwise Value is an econfd:key() tuple that identifies the entry it was moved after.
-
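-For illustration, a minimal callback sketch that collects the keypaths of all created entries in the diff; Sock and Tid are assumed from context, and the ?MOP_CREATED and ?ITER_RECURSE macros are assumed to come from econfd.hrl:
-
-```erlang
-%% Collect the keypath of every created entry in the diff.
-%% Returning ?ITER_RECURSE descends into the children of each node.
-collect_created(Sock, Tid) ->
-    Fun = fun(IKP, ?MOP_CREATED, _OldValue, _Value, Acc) ->
-                  {ok, ?ITER_RECURSE, [IKP | Acc]};
-             (_IKP, _Op, _OldValue, _Value, Acc) ->
-                  {ok, ?ITER_RECURSE, Acc}
-          end,
-    econfd_maapi:diff_iterate(Sock, Tid, Fun, 0, []).
-```
-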
-
-### do_connect/1
-
-```erlang
-do_connect(SockAddr)
-```
-
-### end_progress_span/3
-
-```erlang
--spec end_progress_span(Socket, SpanId1, Annotation) -> Result
- when
- Socket :: econfd:socket(),
- SpanId1 :: binary(),
- Annotation :: iolist(),
- Result ::
- {ok,
- {SpanId2 :: binary() | undefined,
- TraceId :: binary() | undefined}}.
-```
-
-Related types: [econfd:socket()](econfd.md#socket-0)
-
-### end_user_session/1
-
-```erlang
--spec end_user_session(Socket) -> ok | err()
- when Socket :: econfd:socket().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Ends a user session.
-
-
-### exists/3
-
-```erlang
--spec exists(Socket, Tid, IKeypath) -> Result
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath(),
- Result :: {ok, boolean()} | err().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-Check if an element exists.
-
-
-### find_next/3
-
-```erlang
--spec find_next(Cursor, Type, Key) -> Result
- when
- Cursor :: maapi_cursor(),
- Type :: find_next_type(),
- Key :: econfd:key(),
- Result ::
- {ok, econfd:key(), Cursor} | done | err().
-```
-
-Related types: [err()](#err-0), [find\_next\_type()](#find_next_type-0), [maapi\_cursor()](#maapi_cursor-0), [econfd:key()](econfd.md#key-0)
-
-Find the list entry matching Type and Key.
-
-
-### finish_trans/2
-
-```erlang
--spec finish_trans(Socket, Tid) -> ok | err()
- when Socket :: econfd:socket(), Tid :: integer().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Finish a transaction.
-
-
-### get_attrs/4
-
-```erlang
--spec get_attrs(Socket, Tid, IKeypath, AttrList) -> Result
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath(),
- AttrList :: [Attr],
- Result :: {ok, [{Attr, Value}]} | err().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-Get the selected attributes for an element.
-
-Calling with an empty attribute list returns all attributes.
-
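-For example, a minimal sketch (Sock, Tid and IKP are assumed from an established session and transaction) that fetches and prints every attribute set on a node:
-
-```erlang
-%% An empty attribute list requests all attributes of the node.
-{ok, Attrs} = econfd_maapi:get_attrs(Sock, Tid, IKP, []),
-lists:foreach(fun({Attr, Value}) ->
-                      io:format("attr ~p = ~p~n", [Attr, Value])
-              end, Attrs).
-```
-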
-
-### get_authorization_info/2
-
-```erlang
--spec get_authorization_info(Socket, USid) -> Result
- when
- Socket :: econfd:socket(),
- USid :: integer(),
- Result :: {ok, Info} | err().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Get authorization info for a user session.
-
-
-### get_case/4
-
-```erlang
--spec get_case(Socket, Tid, IKeypath, Choice) -> Result
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath(),
- Choice :: econfd:qtag() | [econfd:qtag()],
- Result :: {ok, Case} | err().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:qtag()](econfd.md#qtag-0), [econfd:socket()](econfd.md#socket-0)
-
-Get the current case for a choice.
-
-
-### get_elem/3
-
-```erlang
--spec get_elem(Socket, Tid, IKeypath) -> Result
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath(),
- Result :: {ok, econfd:value()} | err().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0), [econfd:value()](econfd.md#value-0)
-
-Read an element.
-
-
-### get_elem_no_defaults/3
-
-```erlang
--spec get_elem_no_defaults(Socket, Tid, IKeypath) -> Result
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath(),
- Result :: {ok, Value} | err().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-Read an element, but return 'default' instead of the value if the default value is in effect.
-
-
-### get_mode/2
-
-```erlang
--spec get_mode(Socket, Tid) -> {ok, trans_mode() | -1}
- when Socket :: econfd:socket(), Tid :: integer().
-```
-
-Related types: [trans\_mode()](#trans_mode-0), [econfd:socket()](econfd.md#socket-0)
-
-Get the mode for the given transaction.
-
-
-### get_my_user_session_id/1
-
-```erlang
--spec get_my_user_session_id(Socket) -> Result
- when
- Socket :: econfd:socket(),
- Result :: {ok, USid} | err().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Get my user session id.
-
-
-### get_next/1
-
-```erlang
--spec get_next(Cursor) -> Result
- when
- Cursor :: maapi_cursor(),
- Result ::
- {ok, econfd:key(), Cursor} | done | err().
-```
-
-Related types: [err()](#err-0), [maapi\_cursor()](#maapi_cursor-0), [econfd:key()](econfd.md#key-0)
-
-Iterate through the entries of a list.
-
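-A minimal sketch of a full traversal, collecting all keys of the list at IKP (Sock and Tid are assumed from context):
-
-```erlang
-%% Fold over all keys of the list at IKP, returning them in order.
-list_keys(Sock, Tid, IKP) ->
-    next_key(econfd_maapi:init_cursor(Sock, Tid, IKP), []).
-
-next_key(Cursor, Acc) ->
-    case econfd_maapi:get_next(Cursor) of
-        {ok, Key, Cursor2} -> next_key(Cursor2, [Key | Acc]);
-        done               -> {ok, lists:reverse(Acc)};
-        Err                -> Err
-    end.
-```
-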
-
-### get_object/3
-
-```erlang
--spec get_object(Socket, Tid, IKeypath) -> Result
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath(),
- Result :: {ok, [econfd:value()]} | err().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0), [econfd:value()](econfd.md#value-0)
-
-Read all the values in a container or list entry.
-
-
-### get_objects/2
-
-```erlang
--spec get_objects(Cursor, NumEntries) -> Result
- when
- Cursor :: maapi_cursor(),
- NumEntries :: integer(),
- Result ::
- {ok, Cursor, Values} |
- {done, Values} |
- err().
-```
-
-Related types: [err()](#err-0), [maapi\_cursor()](#maapi_cursor-0)
-
-Read all the values for NumEntries list entries, starting at the point given by Cursor.
-
-The return value has one Erlang list for each YANG list entry, i.e. it is a list of at most NumEntries lists. If we reached the end of the YANG list, \{done, Values\} is returned, and there will be fewer than NumEntries lists in Values; otherwise \{ok, Cursor2, Values\} is returned, where Cursor2 can be used to continue the traversal.
-
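-A sketch of the chunked traversal; the cursor is assumed to come from `init_cursor/3`, and the chunk size of 100 is arbitrary:
-
-```erlang
-%% Read the whole list in chunks of 100 entries per round trip.
-read_chunks(Cursor, Acc) ->
-    case econfd_maapi:get_objects(Cursor, 100) of
-        {ok, Cursor2, Values} -> read_chunks(Cursor2, Acc ++ Values);
-        {done, Values}        -> {ok, Acc ++ Values};
-        Err                   -> Err
-    end.
-```
-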
-
-### get_rollback_id/2
-
-```erlang
--spec get_rollback_id(Socket, Tid) -> non_neg_integer() | -1
- when
- Socket :: econfd:socket(), Tid :: integer().
-```
-
-Related types: [econfd:socket()](econfd.md#socket-0)
-
-Get the rollback id of a committed transaction.
-
-
-### get_running_db_status/1
-
-```erlang
--spec get_running_db_status(Socket) -> Result
- when
- Socket :: econfd:socket(),
- Result :: {ok, Status} | err().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Get the "running status".
-
-
-### get_user_session/2
-
-```erlang
--spec get_user_session(Socket, USid) -> Result
- when
- Socket :: econfd:socket(),
- USid :: integer(),
- Result :: {ok, confd_user_info()} | err().
-```
-
-Related types: [confd\_user\_info()](#confd_user_info-0), [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Get session info for a user session.
-
-
-### get_user_sessions/1
-
-```erlang
--spec get_user_sessions(Socket) -> Result
- when
- Socket :: econfd:socket(),
- Result :: {ok, [USid]} | err().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Get all user sessions.
-
-
-### get_values/4
-
-```erlang
--spec get_values(Socket, Tid, IKeypath, Values) -> Result
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath(),
- Values :: [econfd:tagval()],
- Result :: {ok, [econfd:tagval()]} | err().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0), [econfd:tagval()](econfd.md#tagval-0)
-
-Read the values for the leafs that have the "value" 'not_found' in the Values list.
-
-This can be used to read an arbitrary set of sub-elements of a container or list entry. The return value is a list of the same length as Values, i.e. the requested leafs are in the same position in the returned list as in the Values argument. The elements in the returned list are always "canonical" though, i.e. of the form [`econfd:tagval()`](econfd.md#tagval-0).
-
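-An illustrative sketch only: the namespace macro ?NS and the leaf names name and port are placeholders for identifiers generated from the YANG model, and Sock, Tid and IKP are assumed from context:
-
-```erlang
-%% Request exactly two leafs of the entry at IKP in one round trip,
-%% marking the wanted leafs with the placeholder "value" not_found.
-{ok, [{_, NameVal}, {_, PortVal}]} =
-    econfd_maapi:get_values(Sock, Tid, IKP,
-                            [{[?NS|name], not_found},
-                             {[?NS|port], not_found}]).
-```
-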
-
-### hide_group/3
-
-```erlang
--spec hide_group(Socket, Tid, GroupName) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- GroupName :: binary().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Hide a hide group.
-
-Hide all nodes belonging to a hide group in a transaction that started with flag FLAG_HIDE_ALL_HIDEGROUPS.
-
-
-### hkeypath2ikeypath/2
-
-```erlang
--spec hkeypath2ikeypath(Socket, HKeypath) -> Result
- when
- Socket :: econfd:socket(),
- HKeypath :: [non_neg_integer()],
- Result :: {ok, IKeypath} | err().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Convert a hkeypath to an ikeypath.
-
-
-### ibool/1
-
-```erlang
-ibool(X)
-```
-
-### init_cursor/3
-
-```erlang
--spec init_cursor(Socket, Tid, IKeypath) -> maapi_cursor()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath().
-```
-
-Related types: [maapi\_cursor()](#maapi_cursor-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-Equivalent to [init_cursor(Socket, Tid, IKeypath, undefined)](#init_cursor-4).
-
-
-### init_cursor/4
-
-```erlang
--spec init_cursor(Socket, Tid, IKeypath, XPath) -> maapi_cursor()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath(),
- XPath :: undefined | binary() | string().
-```
-
-Related types: [maapi\_cursor()](#maapi_cursor-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-Initialize a get_next() cursor.
-
-
-### init_upgrade/3
-
-```erlang
--spec init_upgrade(Socket, TimeoutSecs, Flags) -> ok | err()
- when
- Socket :: econfd:socket(),
- TimeoutSecs :: integer(),
- Flags :: non_neg_integer().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Start in-service upgrade.
-
-
-### insert/3
-
-```erlang
--spec insert(Socket, Tid, IKeypath) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-Insert an entry in an integer-keyed list.
-
-
-### install_crypto_keys/1
-
-```erlang
--spec install_crypto_keys(Socket) -> ok | err()
- when Socket :: econfd:socket().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Fetch keys for the encrypted data types from the server.
-
-The encrypted data types are tailf:aes-cfb-128-encrypted-string and tailf:aes-256-cfb-128-encrypted-string.
-
-
-### is_candidate_modified/1
-
-```erlang
--spec is_candidate_modified(Socket) -> Result
- when
- Socket :: econfd:socket(),
- Result :: {ok, boolean()} | err().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Check if candidate has been modified.
-
-
-### is_lock_set/2
-
-```erlang
--spec is_lock_set(Socket, DbName) -> Result
- when
- Socket :: econfd:socket(),
- DbName :: dbname(),
- Result :: {ok, integer()} | err().
-```
-
-Related types: [dbname()](#dbname-0), [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Check if a db is locked or not.
-
-Return 0 or the Usid of the lock owner.
-
-
-### is_running_modified/1
-
-```erlang
--spec is_running_modified(Socket) -> Result
- when
- Socket :: econfd:socket(),
- Result :: {ok, boolean()} | err().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Check if running has been modified since the last copy to startup was done.
-
-
-### iterate/6
-
-```erlang
--spec iterate(Socket, Tid, IKeypath, Fun, Flags, State) -> Result
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath(),
- Fun ::
- fun((IKeypath, Value, Attrs, State) ->
- {ok, Ret, State} | {error, term()}),
- Flags :: non_neg_integer(),
- State :: term(),
- Result :: {ok, State} | {error, term()}.
-```
-
-Related types: [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-Iterate over all the data in the transaction and the underlying data store.
-
-Flags can be given as ?MAAPI_ITER_WANT_ATTR to request that attributes (if any) are passed to the Fun, otherwise it should be 0. The possible values for Ret in the return value for Fun are the same as for `diff_iterate/5`.
-
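-A sketch of a subtree dump with attributes requested; the ?ITER_RECURSE macro is assumed to come from econfd.hrl:
-
-```erlang
-%% Walk the subtree under IKP, printing each node's value and
-%% attributes, and counting the nodes visited.
-dump_subtree(Sock, Tid, IKP) ->
-    Fun = fun(Path, Value, Attrs, Count) ->
-                  io:format("~p = ~p (attrs: ~p)~n", [Path, Value, Attrs]),
-                  {ok, ?ITER_RECURSE, Count + 1}
-          end,
-    econfd_maapi:iterate(Sock, Tid, IKP, Fun, ?MAAPI_ITER_WANT_ATTR, 0).
-```
-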
-
-### iterate_result/3
-
-```erlang
-iterate_result(Sock, Fun, _)
-```
-
-### keypath_diff_iterate/5
-
-```erlang
--spec keypath_diff_iterate(Socket, Tid, IKeypath, Fun, State) -> Result
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath(),
- Fun ::
- fun((IKeypath, Op, OldValue,
- Value, State) ->
- {ok, Ret, State} |
- {error, term()}),
- State :: term(),
- Result ::
- {ok, State} | {error, term()}.
-```
-
-Related types: [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-Iterate through a diff.
-
-This function behaves like `diff_iterate/5`, with the exception that the provided keypath IKeypath prunes the tree, so that only diffs below that path are considered.
-
-
-### keypath_diff_iterate/6
-
-```erlang
-keypath_diff_iterate(Sock, Tid, IKP, Fun, Flags, InitState)
-```
-
-### kill_user_session/2
-
-```erlang
--spec kill_user_session(Socket, USid) -> ok | err()
- when
- Socket :: econfd:socket(),
- USid :: integer().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Kill a user session.
-
-
-### lock/2
-
-```erlang
--spec lock(Socket, DbName) -> ok | err()
- when Socket :: econfd:socket(), DbName :: dbname().
-```
-
-Related types: [dbname()](#dbname-0), [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Lock a database.
-
-
-### lock_partial/3
-
-```erlang
--spec lock_partial(Socket, DbName, XPath) -> Result
- when
- Socket :: econfd:socket(),
- DbName :: dbname(),
- XPath :: [binary()],
- Result :: {ok, LockId} | err().
-```
-
-Related types: [dbname()](#dbname-0), [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Request a partial lock on a database.
-
-The set of nodes to lock is specified as a list of XPath expressions.
-
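-A sketch of the lock/unlock pairing; the ?CONFD_RUNNING macro is assumed to come from econfd.hrl, and the XPath expression is just an example:
-
-```erlang
-%% Take a partial lock on a subtree of running, do some work,
-%% then release it again with the returned lock id.
-{ok, LockId} = econfd_maapi:lock_partial(Sock, ?CONFD_RUNNING,
-                                         [<<"/devices/device">>]),
-%% ... read or write under the locked nodes here ...
-ok = econfd_maapi:unlock_partial(Sock, LockId).
-```
-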
-
-### mk_uident/1
-
-```erlang
-mk_uident(UId)
-```
-
-### move/4
-
-```erlang
--spec move(Socket, Tid, IKeypath, ToKey) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath(),
- ToKey :: econfd:key().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:key()](econfd.md#key-0), [econfd:socket()](econfd.md#socket-0)
-
-Move (rename) an entry in a list.
-
-
-### move_ordered/4
-
-```erlang
--spec move_ordered(Socket, Tid, IKeypath, To) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath(),
- To ::
- first | last |
- {before | 'after', econfd:key()}.
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:key()](econfd.md#key-0), [econfd:socket()](econfd.md#socket-0)
-
-Move an entry in an "ordered-by user" list.
-
-
-### ncs_apply_template/7
-
-```erlang
--spec ncs_apply_template(Socket, Tid, TemplateName, RootIKeypath,
- Variables, Documents, Shared) ->
- ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- TemplateName :: binary(),
- RootIKeypath :: econfd:ikeypath(),
- Variables :: term(),
- Documents :: term(),
- Shared :: boolean().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-Apply a template that has been loaded into NCS.
-
-The TemplateName parameter gives the name of the template. The Variables parameter is a list of variables and names for substitution into the template.
-
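-A sketch under stated assumptions: the template name, the root path RootIKP, and the \{Name, Value\} shape of the Variables list are illustrative placeholders, not a confirmed format:
-
-```erlang
-%% Apply a (hypothetical) template at RootIKP, substituting one
-%% variable, with FastMap sharing enabled (Shared = true).
-ok = econfd_maapi:ncs_apply_template(Sock, Tid, <<"interface-template">>,
-                                     RootIKP,
-                                     [{<<"IP_ADDRESS">>, <<"10.0.0.1">>}],
-                                     [], true).
-```
-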
-
-### ncs_apply_trans_params/4
-
-```erlang
--spec ncs_apply_trans_params(Socket, Tid, KeepOpen, Params) -> Result
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- KeepOpen :: boolean(),
- Params :: [econfd:tagval()],
- Result ::
- ok |
- {ok, [econfd:tagval()]} |
- err().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0), [econfd:tagval()](econfd.md#tagval-0)
-
-Apply transaction with commit parameters.
-
-This is a version of apply_trans that takes commit parameters in the form of a list of tagged values, according to the input parameters of rpc prepare-transaction as defined in the tailf-netconf-ncs.yang module. The result of this function may include a list of tagged values according to the output parameters of rpc prepare-transaction or rpc commit-transaction, as defined in the tailf-netconf-ncs.yang module.
-
-
-### ncs_get_trans_params/2
-
-```erlang
--spec ncs_get_trans_params(Socket, Tid) -> Result
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- Result ::
- {ok, [econfd:tagval()]} | err().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0), [econfd:tagval()](econfd.md#tagval-0)
-
-Get transaction commit parameters.
-
-
-### ncs_template_variables/2
-
-```erlang
--spec ncs_template_variables(Socket, TemplateName) ->
- {ok, binary()} | err()
- when
- Socket :: econfd:socket(),
- TemplateName :: binary().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Retrieve the variables used in a template.
-
-
-### ncs_template_variables/3
-
-```erlang
--spec ncs_template_variables(Socket, TemplateName, Type) ->
- {ok, binary()} | err()
- when
- Socket :: econfd:socket(),
- TemplateName :: string(),
- Type :: template_type().
-```
-
-Related types: [err()](#err-0), [template\_type()](#template_type-0), [econfd:socket()](econfd.md#socket-0)
-
-Retrieve the variables used in a template.
-
-
-### ncs_templates/1
-
-```erlang
--spec ncs_templates(Socket) -> {ok, binary()} | err()
- when Socket :: econfd:socket().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Retrieve a list of the templates currently loaded into NCS.
-
-
-### ncs_write_service_log_entry/5
-
-```erlang
--spec ncs_write_service_log_entry(Socket, IKeypath, Message, Type,
- Level) ->
- ok | err()
- when
- Socket :: econfd:socket(),
- IKeypath :: econfd:ikeypath(),
- Message :: string(),
- Type :: econfd:value(),
- Level :: econfd:value().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0), [econfd:value()](econfd.md#value-0)
-
-Write a service log entry.
-
-
-### netconf_ssh_call_home/3
-
-```erlang
--spec netconf_ssh_call_home(Socket, Host, Port) -> ok | err()
- when
- Socket :: econfd:socket(),
- Host :: econfd:ip() | string(),
- Port :: non_neg_integer().
-```
-
-Related types: [err()](#err-0), [econfd:ip()](econfd.md#ip-0), [econfd:socket()](econfd.md#socket-0)
-
-### netconf_ssh_call_home_opaque/4
-
-```erlang
--spec netconf_ssh_call_home_opaque(Socket, Host, Opaque, Port) ->
- ok | err()
- when
- Socket :: econfd:socket(),
- Host :: econfd:ip() | string(),
- Opaque :: string(),
- Port :: non_neg_integer().
-```
-
-Related types: [err()](#err-0), [econfd:ip()](econfd.md#ip-0), [econfd:socket()](econfd.md#socket-0)
-
-### num_instances/3
-
-```erlang
--spec num_instances(Socket, Tid, IKeypath) -> Result
- when
- Socket :: econfd:socket(),
- Tid :: non_neg_integer(),
- IKeypath :: econfd:ikeypath(),
- Result :: {ok, integer()} | err().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-Find the number of entries in a list.
-
-
-### perform_upgrade/2
-
-```erlang
--spec perform_upgrade(Socket, LoadPathList) -> ok | err()
- when
- Socket :: econfd:socket(),
- LoadPathList :: [binary()].
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Do in-service upgrade.
-
-
-### prepare_trans/2
-
-```erlang
--spec prepare_trans(Socket, Tid) -> ok | err()
- when Socket :: econfd:socket(), Tid :: integer().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Equivalent to [prepare_trans(Socket, Tid, 0)](#prepare_trans-3).
-
-
-### prepare_trans/3
-
-```erlang
--spec prepare_trans(Socket, Tid, Flags) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- Flags :: non_neg_integer().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Prepare for commit.
-
-
-### prio_message/3
-
-```erlang
--spec prio_message(Socket, To, Message) -> ok | err()
- when
- Socket :: econfd:socket(),
- To :: binary(),
- Message :: binary().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Write priority message.
-
-
-### progress_info/6
-
-```erlang
--spec progress_info(Socket, Verbosity, Msg, SIKP, Attrs, Links) -> ok
- when
- Socket :: econfd:socket(),
- Verbosity :: verbosity(),
- Msg :: iolist(),
- SIKP :: econfd:ikeypath(),
- Attrs ::
- [{K :: binary(),
- V :: binary() | integer()}],
- Links ::
- [{TraceId :: binary() | undefined,
- SpanId :: binary() | undefined}].
-```
-
-Related types: [verbosity()](#verbosity-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-### progress_info_th/7
-
-```erlang
--spec progress_info_th(Socket, Tid, Verbosity, Msg, SIKP, Attrs, Links) ->
- ok
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- Verbosity :: verbosity(),
- Msg :: iolist(),
- SIKP :: econfd:ikeypath(),
- Attrs ::
- [{K :: binary(),
- V :: binary() | integer()}],
- Links ::
- [{TraceId :: binary() | undefined,
- SpanId :: binary() | undefined}].
-```
-
-Related types: [verbosity()](#verbosity-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-### reload_config/1
-
-```erlang
--spec reload_config(Socket) -> ok | err() when Socket :: econfd:socket().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Tell ConfD daemon to reload its configuration.
-
-
-### request_action/3
-
-```erlang
--spec request_action(Socket, Params, IKeypath) -> Result
- when
- Socket :: econfd:socket(),
- Params :: [econfd:tagval()],
- IKeypath :: econfd:ikeypath(),
- Result ::
- ok | {ok, [econfd:tagval()]} | err().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0), [econfd:tagval()](econfd.md#tagval-0)
-
-Invoke an action defined in the data model.
-
-
-### request_action_th/4
-
-```erlang
--spec request_action_th(Socket, Tid, Params, IKeypath) -> Result
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- Params :: [econfd:tagval()],
- IKeypath :: econfd:ikeypath(),
- Result ::
- ok | {ok, [econfd:tagval()]} | err().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0), [econfd:tagval()](econfd.md#tagval-0)
-
-Invoke an action defined in the data model using the provided transaction.
-
-Does the same thing as request_action/3, but uses the current namespace, the path position, and the user session from the transaction indicated by the 'Tid' handle.
-
-
-### reverse/1
-
-```erlang
-reverse(X)
-```
-
-### revert/2
-
-```erlang
--spec revert(Socket, Tid) -> ok | err()
- when Socket :: econfd:socket(), Tid :: integer().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Remove all changes in the transaction.
-
-
-### set_attr/5
-
-```erlang
--spec set_attr(Socket, Tid, IKeypath, Attr, Value) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath(),
- Attr :: integer(),
- Value :: econfd:value() | undefined.
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0), [econfd:value()](econfd.md#value-0)
-
-Set an attribute for an element. Value == undefined means that the attribute should be deleted.
-
-
-### set_comment/3
-
-```erlang
--spec set_comment(Socket, Tid, Comment) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- Comment :: binary().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Set the "Comment" that is stored in the rollback file when a transaction towards running is committed.
-
-
-### set_delayed_when/3
-
-```erlang
--spec set_delayed_when(Socket, Tid, Value) -> Result
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- Value :: boolean(),
- Result :: {ok, OldValue} | err().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Enable/disable the "delayed when" mode for a transaction.
-
-Returns the old setting on success.
-
-
-### set_elem/4
-
-```erlang
--spec set_elem(Socket, Tid, IKeypath, Value) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath(),
- Value :: econfd:value().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0), [econfd:value()](econfd.md#value-0)
-
-Write an element.
-
-
-### set_elem2/4
-
-```erlang
--spec set_elem2(Socket, Tid, IKeypath, BinValue) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath(),
- BinValue :: binary().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-Write an element using the textual value representation.
-
-
-### set_flags/3
-
-```erlang
--spec set_flags(Socket, Tid, Flags) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- Flags :: non_neg_integer().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Change flag settings for a transaction.
-
-See ?MAAPI_FLAG_XXX in econfd.hrl for the available flags; however, ?MAAPI_FLAG_HIDE_INACTIVE, ?MAAPI_FLAG_DELAYED_WHEN, and ?MAAPI_FLAG_HIDE_ALL_HIDEGROUPS cannot be changed after transaction start (but see `set_delayed_when/3`).
-
-
-### set_label/3
-
-```erlang
--spec set_label(Socket, Tid, Label) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- Label :: binary().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Set the "Label" that is stored in the rollback file when a transaction towards running is committed.
-
-
-### set_object/4
-
-```erlang
--spec set_object(Socket, Tid, IKeypath, ValueList) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath(),
- ValueList :: [econfd:value()].
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0), [econfd:value()](econfd.md#value-0)
-
-Write an entire object, i.e. YANG list entry or container.
-
-
-### set_readonly_mode/2
-
-```erlang
--spec set_readonly_mode(Socket, Mode) -> {ok, boolean()} | err()
- when
- Socket :: econfd:socket(),
- Mode :: boolean().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Control whether we can create read-write transactions.
-
-
-### set_running_db_status/2
-
-```erlang
--spec set_running_db_status(Socket, Status) -> ok | err()
- when
- Socket :: econfd:socket(),
- Status :: Valid | InValid.
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Set the "running status".
-
-
-### set_user_session/2
-
-```erlang
--spec set_user_session(Socket, USid) -> ok | err()
- when
- Socket :: econfd:socket(),
- USid :: integer().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Assign a user session.
-
-
-### set_values/4
-
-```erlang
--spec set_values(Socket, Tid, IKeypath, ValueList) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath(),
- ValueList :: [econfd:tagval()].
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0), [econfd:tagval()](econfd.md#tagval-0)
-
-Write a list of tagged values.
-
-This function is an alternative to `set_object/4`, and allows for writing more complex structures (e.g. multiple entries in a list).
-
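-A minimal sketch; ?NS and the leaf names are placeholders for model-generated identifiers, and the native value representation is assumed to follow econfd:tagval():
-
-```erlang
-%% Write two leafs under the node at IKP in a single call.
-ok = econfd_maapi:set_values(Sock, Tid, IKP,
-                             [{[?NS|name], <<"n1">>},
-                              {[?NS|port], 8080}]).
-```
-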
-
-### shared_create/3
-
-```erlang
--spec shared_create(Socket, Tid, IKeypath) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-Create a new element, and also set an attribute indicating how many times this element has been created.
-
-
-### shared_set_elem/4
-
-```erlang
--spec shared_set_elem(Socket, Tid, IKeypath, Value) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath(),
- Value :: econfd:value().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0), [econfd:value()](econfd.md#value-0)
-
-Write an element from NCS FastMap.
-
-
-### shared_set_elem2/4
-
-```erlang
--spec shared_set_elem2(Socket, Tid, IKeypath, BinValue) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath(),
- BinValue :: binary().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-Write an element using the textual value representation from NCS fastmap.
-
-
-### shared_set_values/4
-
-```erlang
--spec shared_set_values(Socket, Tid, IKeypath, ValueList) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- IKeypath :: econfd:ikeypath(),
- ValueList :: [econfd:tagval()].
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0), [econfd:tagval()](econfd.md#tagval-0)
-
-Write a list of tagged values from NCS FastMap.
-
-
-### snmpa_reload/2
-
-```erlang
--spec snmpa_reload(Socket, Synchronous) -> ok | err()
- when
- Socket :: econfd:socket(),
- Synchronous :: boolean().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Tell ConfD to reload external SNMP Agent config data.
-
-
-### start_phase/3
-
-```erlang
--spec start_phase(Socket, Phase, Synchronous) -> ok | err()
- when
- Socket :: econfd:socket(),
- Phase :: 1 | 2,
- Synchronous :: boolean().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Tell ConfD to proceed to next start phase.
-
-
-### start_progress_span/6
-
-```erlang
--spec start_progress_span(Socket, Verbosity, Msg, SIKP, Attrs, Links) ->
- Result
- when
- Socket :: econfd:socket(),
- Verbosity :: verbosity(),
- Msg :: iolist(),
- SIKP :: econfd:ikeypath(),
- Attrs ::
- [{K :: binary(),
- V :: binary() | integer()}],
- Links ::
- [{TraceId :: binary() | undefined,
- SpanId1 :: binary() | undefined}],
- Result ::
- {ok,
- {SpanId2 :: binary() | undefined,
- TraceId :: binary() | undefined}}.
-```
-
-Related types: [verbosity()](#verbosity-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-### start_progress_span_th/7
-
-```erlang
--spec start_progress_span_th(Socket, Tid, Verbosity, Msg, SIKP, Attrs,
- Links) ->
- Result
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- Verbosity :: verbosity(),
- Msg :: iolist(),
- SIKP :: econfd:ikeypath(),
- Attrs ::
- [{K :: binary(),
- V :: binary() | integer()}],
- Links ::
- [{TraceId ::
- binary() | undefined,
- SpanId1 ::
- binary() | undefined}],
- Result ::
- {ok,
- {SpanId2 ::
- binary() | undefined,
- TraceId ::
- binary() | undefined}}.
-```
-
-Related types: [verbosity()](#verbosity-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-### start_trans/3
-
-```erlang
--spec start_trans(Socket, DbName, RwMode) -> Result
- when
- Socket :: econfd:socket(),
- DbName :: dbname(),
- RwMode :: integer(),
- Result :: {ok, integer()} | err().
-```
-
-Related types: [dbname()](#dbname-0), [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Start a new transaction.
-
-
-### start_trans/4
-
-```erlang
--spec start_trans(Socket, DbName, RwMode, USid) -> Result
- when
- Socket :: econfd:socket(),
- DbName :: dbname(),
- RwMode :: integer(),
- USid :: integer(),
- Result :: {ok, integer()} | err().
-```
-
-Related types: [dbname()](#dbname-0), [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Start a new transaction within an existing user session.
-
-
-### start_trans/5
-
-```erlang
--spec start_trans(Socket, DbName, RwMode, USid, Flags) -> Result
- when
- Socket :: econfd:socket(),
- DbName :: dbname(),
- RwMode :: integer(),
- USid :: integer(),
- Flags :: non_neg_integer(),
- Result :: {ok, integer()} | err().
-```
-
-Related types: [dbname()](#dbname-0), [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Start a new transaction within an existing user session and/or with flags.
-
-See ?MAAPI_FLAG_XXX in econfd.hrl for the available flags. To use the existing user session of the socket, give Usid = 0.
-
-
-### start_trans/6
-
-```erlang
-start_trans(Sock, DbName, RwMode, Usid, Flags, UId)
-```
-
-### start_trans_in_trans/4
-
-```erlang
--spec start_trans_in_trans(Socket, RwMode, USid, Tid) -> Result
- when
- Socket :: econfd:socket(),
- RwMode :: integer(),
- USid :: integer(),
- Tid :: integer(),
- Result :: {ok, integer()} | err().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Start a new transaction with an existing transaction as backend.
-
-To use the existing user session of the socket, give Usid = 0.
-
-
-### start_trans_in_trans/5
-
-```erlang
--spec start_trans_in_trans(Socket, RwMode, USid, Tid, Flags) -> Result
- when
- Socket :: econfd:socket(),
- RwMode :: integer(),
- USid :: integer(),
- Tid :: integer(),
- Flags :: non_neg_integer(),
- Result :: {ok, integer()} | err().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Start a new transaction with an existing transaction as backend.
-
-To use the existing user session of the socket, give Usid = 0.
-
-
-### start_user_session/6
-
-```erlang
--spec start_user_session(Socket, UserName, Context, Groups, SrcIp,
- Proto) ->
- ok | err()
- when
- Socket :: econfd:socket(),
- UserName :: binary(),
- Context :: binary(),
- Groups :: [binary()],
- SrcIp :: econfd:ip(),
- Proto :: proto().
-```
-
-Related types: [err()](#err-0), [proto()](#proto-0), [econfd:ip()](econfd.md#ip-0), [econfd:socket()](econfd.md#socket-0)
-
-Equivalent to [start_user_session(Socket, UserName, Context, Groups, SrcIp, 0, Proto)](#start_user_session-7).
-
-
-### start_user_session/7
-
-```erlang
--spec start_user_session(Socket, UserName, Context, Groups, SrcIp,
- SrcPort, Proto) ->
- ok | err()
- when
- Socket :: econfd:socket(),
- UserName :: binary(),
- Context :: binary(),
- Groups :: [binary()],
- SrcIp :: econfd:ip(),
- SrcPort :: non_neg_integer(),
- Proto :: proto().
-```
-
-Related types: [err()](#err-0), [proto()](#proto-0), [econfd:ip()](econfd.md#ip-0), [econfd:socket()](econfd.md#socket-0)
-
-Equivalent to [start_user_session(Socket, UserName, Context, Groups, SrcIp, 0, Proto, undefined)](#start_user_session-8).
-
-
-### start_user_session/8
-
-```erlang
--spec start_user_session(Socket, UserName, Context, Groups, SrcIp,
- SrcPort, Proto, UId) ->
- ok | err()
- when
- Socket :: econfd:socket(),
- UserName :: binary(),
- Context :: binary(),
- Groups :: [binary()],
- SrcIp :: econfd:ip(),
- SrcPort :: non_neg_integer(),
- Proto :: proto(),
- UId ::
- confd_user_identification() |
- undefined.
-```
-
-Related types: [confd\_user\_identification()](#confd_user_identification-0), [err()](#err-0), [proto()](#proto-0), [econfd:ip()](econfd.md#ip-0), [econfd:socket()](econfd.md#socket-0)
-
-Initiate a new maapi user session.
-
-Returns a maapi session id. Before we can execute any maapi functions, we must always have an associated user session.
-
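-An end-to-end sketch of a session: connect, start a user session, run a read-write transaction, and commit it. The macro names (?CONFD_PROTO_TCP, ?CONFD_RUNNING, ?CONFD_READ_WRITE) are assumed to come from econfd.hrl, and 4565 is assumed to be the ConfD IPC port:
-
-```erlang
-demo(IKP, Value) ->
-    {ok, Sock} = econfd_maapi:connect({127,0,0,1}, 4565),
-    ok = econfd_maapi:start_user_session(Sock, <<"admin">>, <<"maapi">>,
-                                         [<<"admin">>], {127,0,0,1},
-                                         ?CONFD_PROTO_TCP),
-    {ok, Tid} = econfd_maapi:start_trans(Sock, ?CONFD_RUNNING,
-                                         ?CONFD_READ_WRITE),
-    ok = econfd_maapi:set_elem(Sock, Tid, IKP, Value),
-    %% two-phase commit sequence, cf. prepare_trans/2 and commit_trans/2
-    ok = econfd_maapi:prepare_trans(Sock, Tid),
-    ok = econfd_maapi:commit_trans(Sock, Tid),
-    econfd_maapi:end_user_session(Sock).
-```
-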
-
-### stop/1
-
-```erlang
--spec stop(Socket) -> ok when Socket :: econfd:socket().
-```
-
-Related types: [econfd:socket()](econfd.md#socket-0)
-
-Equivalent to [stop(Sock, true)](#stop-2).
-
-Tell ConfD daemon to stop, returns when daemon has exited.
-
-
-### stop/2
-
-```erlang
--spec stop(Socket, Synchronous) -> ok
- when Socket :: econfd:socket(), Synchronous :: boolean().
-```
-
-Related types: [econfd:socket()](econfd.md#socket-0)
-
-Tell ConfD daemon to stop; if Synchronous is true, the call will not return until the daemon has come to a halt.
-
-Note that the socket will most certainly not be usable again, since ConfD will close its end when it exits.
-
-
-### sys_message/3
-
-```erlang
--spec sys_message(Socket, To, Message) -> ok | err()
- when
- Socket :: econfd:socket(),
- To :: binary(),
- Message :: binary().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Write system message.
-
-
-### unhide_group/3
-
-```erlang
--spec unhide_group(Socket, Tid, GroupName) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- GroupName :: binary().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Unhide a hide group.
-
-Unhide all nodes belonging to a hide group in a transaction that started with flag FLAG_HIDE_ALL_HIDEGROUPS.
-
-
-### unlock/2
-
-```erlang
--spec unlock(Socket, DbName) -> ok | err()
- when Socket :: econfd:socket(), DbName :: dbname().
-```
-
-Related types: [dbname()](#dbname-0), [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Unlock a database.
-
-
-### unlock_partial/2
-
-```erlang
--spec unlock_partial(Socket, LockId) -> ok | err()
- when
- Socket :: econfd:socket(),
- LockId :: integer().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Remove the partial lock identified by LockId.
-
-
-### user_message/4
-
-```erlang
--spec user_message(Socket, To, From, Message) -> ok | err()
- when
- Socket :: econfd:socket(),
- To :: binary(),
- From :: binary(),
- Message :: binary().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Write user message.
-
-
-### validate_trans/4
-
-```erlang
--spec validate_trans(Socket, Tid, UnLock, ForceValidation) -> ok | err()
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- UnLock :: boolean(),
- ForceValidation :: boolean().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Validate the transaction.
-
-
-### wait_start/1
-
-```erlang
--spec wait_start(Socket) -> ok | err() when Socket :: econfd:socket().
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Equivalent to [wait_start(Socket, 2)](#wait_start-2).
-
-Wait until ConfD daemon has completely started.
-
-
-### wait_start/2
-
-```erlang
--spec wait_start(Socket, Phase) -> ok | err()
- when Socket :: econfd:socket(), Phase :: 1 | 2.
-```
-
-Related types: [err()](#err-0), [econfd:socket()](econfd.md#socket-0)
-
-Wait until ConfD daemon has reached a certain start phase.
-
-
-### xpath_eval/6
-
-```erlang
--spec xpath_eval(Socket, Tid, Expr, ResultFun, State, Options) -> Result
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- Expr :: binary() | {compiled, Source, Compiled},
- ResultFun ::
- fun((IKeypath, Value, State) -> {Ret, State}),
- State :: term(),
- Options ::
- [xpath_eval_option() | {initstate, term()}],
- Result :: {ok, State} | err().
-```
-
-Related types: [err()](#err-0), [xpath\_eval\_option()](#xpath_eval_option-0), [econfd:socket()](econfd.md#socket-0)
-
-Evaluate the XPath expression Expr, invoking ResultFun for each node in the resulting node set.
-
-The possible values for Ret in the return value for ResultFun are ?ITER_CONTINUE and ?ITER_STOP.
-
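-A sketch that collects the keypath of every matched node; ?ITER_CONTINUE is assumed to come from econfd.hrl, and the expression is just an example:
-
-```erlang
-%% Accumulate the keypaths of all nodes in the result node set.
-Fun = fun(IKP, _Value, Acc) -> {?ITER_CONTINUE, [IKP | Acc]} end,
-{ok, Paths} = econfd_maapi:xpath_eval(Sock, Tid,
-                                      <<"/devices/device/name">>,
-                                      Fun, [], []).
-```
-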
-
-### xpath_eval/7
-
-```erlang
--spec xpath_eval(Socket, Tid, Expr, ResultFun, TraceFun, State, Context) ->
- Result
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- Expr :: binary(),
- ResultFun ::
- fun((IKeypath, Value, State) -> {Ret, State}),
- TraceFun ::
- fun((binary()) -> none()) | undefined,
- State :: term(),
- Context :: econfd:ikeypath() | [],
- Result :: {ok, State} | {error, term()}.
-```
-
-Related types: [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-Evaluate the XPath expression Expr, invoking ResultFun for each node in the resulting node set.
-
-The possible values for Ret in the return value for ResultFun are ?ITER_CONTINUE and ?ITER_STOP.
-
-
-### xpath_eval_expr/4
-
-```erlang
--spec xpath_eval_expr(Socket, Tid, Expr, Options) -> Result
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- Expr ::
- binary() | {compiled, Source, Compiled},
- Options :: [xpath_eval_option()],
- Result :: {ok, binary()} | err().
-```
-
-Related types: [err()](#err-0), [xpath\_eval\_option()](#xpath_eval_option-0), [econfd:socket()](econfd.md#socket-0)
-
-Evaluate the XPath expression Expr, returning the result as a string.
-
-
-### xpath_eval_expr/5
-
-```erlang
--spec xpath_eval_expr(Socket, Tid, Expr, TraceFun, Context) -> Result
- when
- Socket :: econfd:socket(),
- Tid :: integer(),
- Expr :: binary(),
- TraceFun ::
- fun((binary()) -> none()) | undefined,
- Context :: econfd:ikeypath() | [],
- Result :: {ok, binary()} | err().
-```
-
-Related types: [err()](#err-0), [econfd:ikeypath()](econfd.md#ikeypath-0), [econfd:socket()](econfd.md#socket-0)
-
-Evaluate the XPath expression Expr, returning the result as a string.
-
-
-### xpath_eval_expr_loop/2
-
-```erlang
-xpath_eval_expr_loop(Sock, TraceFun)
-```
-
-### xpath_eval_loop/4
-
-```erlang
-xpath_eval_loop(Sock, ResultFun, TraceFun, State)
-```
diff --git a/developer-reference/erlang/econfd_notif.md b/developer-reference/erlang/econfd_notif.md
deleted file mode 100644
index 6a992346..00000000
--- a/developer-reference/erlang/econfd_notif.md
+++ /dev/null
@@ -1,210 +0,0 @@
-# Module econfd_notif
-
-An Erlang interface equivalent to the event notifications C-API (documented in confd_lib_events(3)).
-
-
-## Types
-
-### notif_option/0
-
-```erlang
--type notif_option() ::
- {heartbeat_interval, integer()} |
- {health_check_interval, integer()} |
- {stream_name, atom()} |
- {start_time, econfd:datetime()} |
- {stop_time, econfd:datetime()} |
- {xpath_filter, binary()} |
- {usid, integer()} |
- {verbosity, 0..3}.
-```
-
-Related types: [econfd:datetime()](econfd.md#datetime-0)
-
-
-### notification/0
-
-```erlang
--type notification() ::
- #econfd_notif_audit{} |
- #econfd_notif_syslog{} |
- #econfd_notif_commit_simple{} |
- #econfd_notif_commit_diff{} |
- #econfd_notif_user_session{} |
- #econfd_notif_ha{} |
- #econfd_notif_subagent_info{} |
- #econfd_notif_commit_failed{} |
- #econfd_notif_snmpa{} |
- #econfd_notif_forward_info{} |
- #econfd_notif_confirmed_commit{} |
- #econfd_notif_upgrade{} |
- #econfd_notif_progress{} |
- #econfd_notif_stream_event{} |
- #econfd_notif_confd_compaction{} |
- #econfd_notif_ncs_cq_progress{} |
- #econfd_notif_ncs_audit_network{} |
- confd_heartbeat | confd_health_check | confd_reopen_logs |
- ncs_package_reload.
-```
-
-## Functions
-
-### close/1
-
-```erlang
--spec close(Socket) -> Result
- when
- Socket :: econfd:socket(),
- Result :: ok | {error, econfd:error_reason()}.
-```
-
-Related types: [econfd:error\_reason()](econfd.md#error_reason-0), [econfd:socket()](econfd.md#socket-0)
-
-Close the event notification connection.
-
-
-### connect/2
-
-```erlang
--spec connect(Path, Mask) -> econfd:connect_result()
- when Path :: string(), Mask :: integer();
- (Address, Mask) -> econfd:connect_result()
- when Address :: econfd:ip(), Mask :: integer().
-```
-
-Related types: [econfd:connect\_result()](econfd.md#connect_result-0), [econfd:ip()](econfd.md#ip-0)
-
-### connect/3
-
-```erlang
--spec connect(Path, Mask, Options) -> econfd:connect_result()
- when
- Path :: string(),
- Mask :: integer(),
- Options :: [notif_option()];
- (Address, Port, Mask) -> econfd:connect_result()
- when
- Address :: econfd:ip(),
- Port :: non_neg_integer(),
- Mask :: integer().
-```
-
-Related types: [notif\_option()](#notif_option-0), [econfd:connect\_result()](econfd.md#connect_result-0), [econfd:ip()](econfd.md#ip-0)
-
-### connect/4
-
-```erlang
-connect(Address, Port, Mask, Options)
-```
-
-### do_connect/3
-
-```erlang
--spec do_connect(Address, Mask, Options) -> econfd:connect_result()
- when
- Address ::
- #econfd_conn_ip{} | #econfd_conn_local{},
- Mask :: integer(),
- Options :: [Option].
-```
-
-Related types: [econfd:connect\_result()](econfd.md#connect_result-0)
-
-Connect to the notif server.
-
-
-### handle_notif/1
-
-```erlang
--spec handle_notif(Notif) -> notification()
- when Notif :: binary() | term().
-```
-
-Related types: [notification()](#notification-0)
-
-Decode the notif message and return the corresponding record depending on the type of the message.
-
-It is the responsibility of the application to read data from the notifications socket.
-
-
-### maybe_element/2
-
-```erlang
-maybe_element(N, Tuple)
-```
-
-### notification_done/2
-
-```erlang
--spec notification_done(Socket, Thandle) -> Result
- when
- Socket :: econfd:socket(),
- Thandle :: integer(),
- Result ::
- ok | {error, econfd:error_reason()}.
-```
-
-Related types: [econfd:error\_reason()](econfd.md#error_reason-0), [econfd:socket()](econfd.md#socket-0)
-
-Indicate that we're done with diff processing.
-
-Whenever we subscribe to ?CONFD_NOTIF_COMMIT_DIFF we must indicate to ConfD that we're done with the diff processing. The transaction hangs until we've done this.
-
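-A sketch of the acknowledgement step; the record field name thandle is an assumption here, so check the #econfd_notif_commit_diff{} definition in econfd.hrl:
-
-```erlang
-%% Process a commit-diff notification, then release the hanging
-%% transaction. The thandle field name is an assumption.
-handle(#econfd_notif_commit_diff{thandle = Th}, Sock) ->
-    %% ... inspect the diff, e.g. via econfd_maapi:diff_iterate/5 ...
-    ok = econfd_notif:notification_done(Sock, Th).
-```
-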
-
-### notification_done/3
-
-```erlang
--spec notification_done(Socket, Usid, NotifType) -> Result
- when
- Socket :: econfd:socket(),
- Usid :: integer(),
- NotifType :: audit | audit_network,
- Result ::
- ok | {error, econfd:error_reason()}.
-```
-
-Related types: [econfd:error\_reason()](econfd.md#error_reason-0), [econfd:socket()](econfd.md#socket-0)
-
-Indicate that we're done with notif processing.
-
-When we subscribe to ?CONFD_NOTIF_AUDIT with ?CONFD_NOTIF_AUDIT_SYNC or to ?NCS_NOTIF_AUDIT_NETWORK with ?NCS_NOTIF_AUDIT_NETWORK_SYNC, we must indicate that we're done with the notif processing. The user-session hangs until we've done this.
-
-
-### recv/1
-
-```erlang
-recv(Socket)
-```
-
-Equivalent to [recv(Socket, infinity)](#recv-2).
-
-
-### recv/2
-
-```erlang
--spec recv(Socket, Timeout) -> Result
- when
- Socket :: econfd:socket(),
- Timeout :: non_neg_integer() | infinity,
- Result ::
- {ok, notification()} |
- {error, econfd:transport_error()} |
- {error, econfd:error_reason()}.
-```
-
-Related types: [notification()](#notification-0), [econfd:error\_reason()](econfd.md#error_reason-0), [econfd:socket()](econfd.md#socket-0), [econfd:transport\_error()](econfd.md#transport_error-0)
-
-Wait for an event notification message and return the corresponding record depending on the type of the message.
-
-The logno element in the record is an integer. These integers can be used as an index to the function `econfd_logsyms:get_logsym/1` in order to get a textual description for the event.
-
-When recv/2 returns \{error, timeout\}, the connection (and its event subscriptions) is still active and the application needs to call recv/2 again. But if recv/2 returns \{error, Reason\}, the connection to ConfD is closed and all event subscriptions associated with it are cleared.
-
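-A minimal receive loop reflecting these rules; handle/1 is a placeholder for application-specific processing:
-
-```erlang
-%% Poll with a one-second timeout: a timeout leaves the subscriptions
-%% intact, while any other error means the connection is gone.
-loop(Sock) ->
-    case econfd_notif:recv(Sock, 1000) of
-        {ok, Notif}      -> handle(Notif), loop(Sock);
-        {error, timeout} -> loop(Sock);
-        {error, Reason}  -> {connection_closed, Reason}
-    end.
-```
-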
-
-### unpack_ha_node/1
-
-```erlang
-unpack_ha_node(_)
-```
diff --git a/developer-reference/erlang/econfd_schema.md b/developer-reference/erlang/econfd_schema.md
deleted file mode 100644
index 50a7e55b..00000000
--- a/developer-reference/erlang/econfd_schema.md
+++ /dev/null
@@ -1,206 +0,0 @@
-# Module econfd_schema
-
-Support for using schema information in the Erlang API.
-
-Keeps schema info in a set of ets tables named by the toplevel namespace.
-
-
-## Types
-
-### confd_cs_choice/0
-
-```erlang
--type confd_cs_choice() :: #confd_cs_choice{}.
-```
-
-### confd_cs_node/0
-
-```erlang
--type confd_cs_node() :: #confd_cs_node{}.
-```
-
-### confd_nsinfo/0
-
-```erlang
--type confd_nsinfo() :: #confd_nsinfo{}.
-```
-
-### confd_type_cbs/0
-
-```erlang
--type confd_type_cbs() :: #confd_type_cbs{}.
-```
-
-## Functions
-
-### choice_children/1
-
-```erlang
--spec choice_children(Node) -> Children
- when
- Node ::
- confd_cs_node() |
- [econfd:qtag() | confd_cs_choice()],
- Children :: [econfd:qtag()].
-```
-
-Related types: [confd\_cs\_choice()](#confd_cs_choice-0), [confd\_cs\_node()](#confd_cs_node-0), [econfd:qtag()](econfd.md#qtag-0)
-
-Get a flat list of children for a [`confd_cs_node()`](#confd_cs_node-0), with any choice/case structure(s) removed.
-
-
-### get_builtin_type/1
-
-```erlang
-get_builtin_type(_)
-```
-
-### get_cs/2
-
-```erlang
--spec get_cs(Ns, Tagpath) -> Result
- when
- Ns :: econfd:namespace(),
- Tagpath :: econfd:tagpath(),
- Result :: confd_cs_node() | not_found.
-```
-
-Related types: [confd\_cs\_node()](#confd_cs_node-0), [econfd:namespace()](econfd.md#namespace-0), [econfd:tagpath()](econfd.md#tagpath-0)
-
-Find schema node by namespace and tagpath.
-
-
-### get_nslist/0
-
-```erlang
--spec get_nslist() -> [confd_nsinfo()].
-```
-
-Related types: [confd\_nsinfo()](#confd_nsinfo-0)
-
-Get a list of loaded namespaces with info.
-
-
-### get_type/1
-
-```erlang
--spec get_type(TypeName) -> Result
- when
- TypeName :: atom(),
- Result :: econfd:type() | not_found.
-```
-
-Related types: [econfd:type()](econfd.md#type-0)
-
-Get schema type definition identifier for built-in type.
-
-
-### get_type/2
-
-```erlang
--spec get_type(Ns, TypeName) -> econfd:type()
- when Ns :: econfd:namespace(), TypeName :: atom().
-```
-
-Related types: [econfd:namespace()](econfd.md#namespace-0), [econfd:type()](econfd.md#type-0)
-
-Get schema type definition identifier for type defined in namespace.
-
-
-### ikeypath2cs/1
-
-```erlang
--spec ikeypath2cs(IKeypath) -> Result
- when
- IKeypath :: econfd:ikeypath(),
- Result :: confd_cs_node() | not_found.
-```
-
-Related types: [confd\_cs\_node()](#confd_cs_node-0), [econfd:ikeypath()](econfd.md#ikeypath-0)
-
-Find schema node by ikeypath.
-
-
-### ikeypath2nstagpath/1
-
-```erlang
-ikeypath2nstagpath(IKeypath)
-```
-
-### ikeypath2nstagpath/2
-
-```erlang
-ikeypath2nstagpath(T, Acc)
-```
-
-### load/1
-
-```erlang
--spec load(Path) -> Result
- when
- Path :: string(),
- Result :: ok | {error, econfd:error_reason()}.
-```
-
-Related types: [econfd:error\_reason()](econfd.md#error_reason-0)
-
-Load schema info from ConfD.
-
-
-### load/2
-
-```erlang
--spec load(Address, Port) -> Result
- when
- Address :: econfd:ip(),
- Port :: non_neg_integer(),
- Result :: ok | {error, econfd:error_reason()}.
-```
-
-Related types: [econfd:error\_reason()](econfd.md#error_reason-0), [econfd:ip()](econfd.md#ip-0)
-
-### register_type_cbs/1
-
-```erlang
--spec register_type_cbs(TypeCbs) -> ok when TypeCbs :: confd_type_cbs().
-```
-
-Related types: [confd\_type\_cbs()](#confd_type_cbs-0)
-
-Register callbacks for a user-defined type.
-
-For an application running in its own Erlang VM, this function registers the callbacks in the loaded schema information, similar to confd_register_node_type() in the C API. For an application running inside ConfD, this function registers the callbacks in ConfD's internal schema information, similar to using a shared object with confd_type_cb_init() in the C API.
-
-
-### str2val/2
-
-```erlang
--spec str2val(TypeId, Lexical) -> Result
- when
- TypeId :: confd_cs_node() | econfd:type(),
- Lexical :: binary(),
- Result ::
- {ok, Value :: econfd:value()} |
- {error, econfd:error_reason()}.
-```
-
-Related types: [confd\_cs\_node()](#confd_cs_node-0), [econfd:error\_reason()](econfd.md#error_reason-0), [econfd:type()](econfd.md#type-0), [econfd:value()](econfd.md#value-0)
-
-Convert string to value based on schema type.
-
-Note: For type identityref below a mount point (device data in NSO), TypeId must be [`confd_cs_node()`](#confd_cs_node-0).
-
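-A sketch of a round trip through the schema type of the node at IKP; IKP is assumed to point at a leaf whose type accepts the example string:
-
-```erlang
-%% Look up the schema node, parse a string into a value, and
-%% render the value back to a string.
-CsNode = econfd_schema:ikeypath2cs(IKP),
-{ok, Val} = econfd_schema:str2val(CsNode, <<"10.1.1.1">>),
-{ok, Str} = econfd_schema:val2str(CsNode, Val).
-```
-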
-
-### val2str/2
-
-```erlang
--spec val2str(TypeId, Value) -> Result
- when
- TypeId :: confd_cs_node() | econfd:type(),
- Value :: econfd:value(),
- Result ::
- {ok, string()} | {error, econfd:error_reason()}.
-```
-
-Related types: [confd\_cs\_node()](#confd_cs_node-0), [econfd:error\_reason()](econfd.md#error_reason-0), [econfd:type()](econfd.md#type-0), [econfd:value()](econfd.md#value-0)
-
-Convert value to string based on schema type.
-
diff --git a/developer-reference/erlang/pics/arch.png b/developer-reference/erlang/pics/arch.png
deleted file mode 100644
index ae8c8676..00000000
Binary files a/developer-reference/erlang/pics/arch.png and /dev/null differ
diff --git a/developer-reference/java-api-reference.md b/developer-reference/java-api-reference.md
deleted file mode 100644
index d058f2db..00000000
--- a/developer-reference/java-api-reference.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-description: NSO Java API Reference.
-icon: square-j
----
-
-# Java API Reference
-
-Visit the link below to learn more.
-
-{% embed url="https://developer.cisco.com/docs/nso-api-6.5/api-overview/" %}
diff --git a/developer-reference/json-rpc-api.md b/developer-reference/json-rpc-api.md
deleted file mode 100644
index 4fa494a6..00000000
--- a/developer-reference/json-rpc-api.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-description: API documentation for JSON-RPC API.
-icon: brackets-curly
----
-
-# JSON-RPC API
-
-Visit the link below to learn more.
-
-{% embed url="https://cisco-tailf.gitbook.io/nso-docs/guides/development/advanced-development/web-ui-development/json-rpc-api" %}
diff --git a/developer-reference/netconf-interface.md b/developer-reference/netconf-interface.md
deleted file mode 100644
index a63c7f85..00000000
--- a/developer-reference/netconf-interface.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-description: Implementation details for NETCONF.
-icon: diagram-project
----
-
-# NETCONF Interface
-
-The NSO NETCONF documentation covers implementation details and extensions to, or deviations from, the NETCONF (RFC 6241) and YANG (RFC 7950) standards, respectively. The IETF NETCONF and YANG RFCs are the main reference guides for the NSO NETCONF interface, while the NSO documentation complements the RFCs.
-
-{% embed url="https://datatracker.ietf.org/doc/html/rfc6241" %}
-
-{% embed url="https://datatracker.ietf.org/doc/html/rfc7950" %}
-
-{% embed url="https://cisco-tailf.gitbook.io/nso-docs/guides/development/core-concepts/northbound-apis/nso-netconf-server" %}
diff --git a/developer-reference/pyapi/README.md b/developer-reference/pyapi/README.md
deleted file mode 100644
index a1e3cf22..00000000
--- a/developer-reference/pyapi/README.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-icon: square-p
----
-
-# Python API Reference
-
-Documentation for Python modules, generated from module source:
-
-* [ncs](ncs.md): NCS Python high level module.
-* [ncs.alarm](ncs.alarm.md): NCS Alarm Manager module.
-* [ncs.application](ncs.application.md): Module for building NCS applications.
-* [ncs.cdb](ncs.cdb.md): CDB high level module.
-* [ncs.dp](ncs.dp.md): Callback module for connecting data providers to ConfD/NCS.
-* [ncs.experimental](ncs.experimental.md): Experimental stuff.
-* [ncs.log](ncs.log.md): This module provides some logging utilities.
-* [ncs.maagic](ncs.maagic.md): Confd/NCS data access module.
-* [ncs.maapi](ncs.maapi.md): MAAPI high level module.
-* [ncs.progress](ncs.progress.md): MAAPI progress trace high level module.
-* [ncs.service\_log](ncs.service_log.md): This module provides service logging.
-* [ncs.template](ncs.template.md): This module implements classes to simplify template processing.
-* [ncs.util](ncs.util.md): Utility module, low level abstractions.
-* [\_ncs](_ncs.md): NCS Python low level module.
-* [\_ncs.cdb](_ncs.cdb.md): Low level module for connecting to NCS built-in XML database (CDB).
-* [\_ncs.dp](_ncs.dp.md): Low level callback module for connecting data providers to NCS.
-* [\_ncs.error](_ncs.error.md): This module defines new NCS Python API exception classes.
-* [\_ncs.events](_ncs.events.md): Low level module for subscribing to NCS event notifications.
-* [\_ncs.ha](_ncs.ha.md): Low level module for connecting to NCS HA subsystem.
-* [\_ncs.maapi](_ncs.maapi.md): Low level module for connecting to NCS with a read/write interface inside transactions.
diff --git a/developer-reference/pyapi/_ncs.cdb.md b/developer-reference/pyapi/_ncs.cdb.md
deleted file mode 100644
index 0da7eae1..00000000
--- a/developer-reference/pyapi/_ncs.cdb.md
+++ /dev/null
@@ -1,905 +0,0 @@
-# \_ncs.cdb Module
-
-Low level module for connecting to NCS built-in XML database (CDB).
-
-This module is used to connect to the NCS built-in XML database, CDB. The purpose of this API is to provide a read and subscription API to CDB.
-
-CDB owns and stores the configuration data. The user of this API typically wants to read that configuration data, and to be notified when someone modifies it through NETCONF, SNMP, the CLI, the Web UI or MAAPI, so that the application can re-read the configuration and act accordingly.
-
-CDB can also store operational data, i.e. data which is designated with a "config false" statement in the YANG data model. Operational data can be both read and written by the applications, but NETCONF and the other northbound agents can only read the operational data.
-
-This documentation should be read together with the [confd\_lib\_cdb(3)](../../resources/man/confd_lib_cdb.3.md) man page.
-
-## Functions
-
-### cd
-
-```python
-cd(sock, path) -> None
-```
-
-Changes the working directory to the given path. Note that this function cannot be used as an existence test.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* path -- path to cd to
-
-### close
-
-```python
-close(sock) -> None
-```
-
-Closes the socket. end\_session() should be called before calling this function.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-
-### connect
-
-```python
-connect(sock, type, ip, port, path) -> None
-```
-
-The application has to connect to NCS before it can interact. There are two different types of connections identified by the type argument - DATA\_SOCKET and SUBSCRIPTION\_SOCKET.
-
-Keyword arguments:
-
-* sock -- a Python socket instance
-* type -- DATA\_SOCKET or SUBSCRIPTION\_SOCKET
-* ip -- the ip address if socket is AF\_INET (optional)
-* port -- the port if socket is AF\_INET (optional)
-* path -- a filename if socket is AF\_UNIX (optional).
-
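-A minimal sketch of a read session over a data socket (the loopback address, port 4569 and the leaf path are illustrative assumptions):
-
-```
-import socket
-import _ncs.cdb as cdb
-
-s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
-cdb.connect(s, cdb.DATA_SOCKET, '127.0.0.1', 4569)  # 4569 assumed as the NCS port
-cdb.start_session(s, cdb.RUNNING)
-value = cdb.get(s, '/some/config/leaf')  # hypothetical leaf path
-cdb.end_session(s)
-cdb.close(s)
-```
-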
-### connect\_name
-
-```python
-connect_name(sock, type, name, ip, port, path) -> None
-```
-
-When we use connect() to create a connection to NCS/CDB, the name argument passed to the library initialization function confd\_init() (see [confd\_lib\_lib(3)](../../resources/man/confd_lib_lib.3.md)) is used to identify the connection in status reports and logs. If we want different names to be used for different connections from the same application process, we can use connect\_name() with the desired name instead of connect().
-
-Keyword arguments:
-
-* sock -- a Python socket instance
-* type -- DATA\_SOCKET or SUBSCRIPTION\_SOCKET
-* name -- the name
-* ip -- the ip address if socket is AF\_INET (optional)
-* port -- the port if socket is AF\_INET (optional)
-* path -- a filename if socket is AF\_UNIX (optional).
-
-### create
-
-```python
-create(sock, path) -> None
-```
-
-Create a new list entry, presence container, or leaf of type empty (unless the type empty leaf is in a union; in that case, use set\_elem() instead). Note that for list entries and containers, sub-elements will not exist until created or set via some of the other functions, so doing an implicit create via set\_object() or set\_values() may be preferred in this case.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* path -- item to create (string)
-
-### cs\_node\_cd
-
-```python
-cs_node_cd(socket, path) -> Union[_ncs.CsNode, None]
-```
-
-Utility function which finds the resulting CsNode given a string keypath.
-
-Does the same thing as \_ncs.cs\_node\_cd(), but can handle paths that are ambiguous due to traversing a mount point, by sending a request to the daemon.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* path -- the path
-
-### delete
-
-```python
-delete(sock, path) -> None
-```
-
-Delete a list entry, presence container, or leaf of type empty, and all its child elements (if any).
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* path -- item to delete (string)
-
-### diff\_iterate
-
-```python
-diff_iterate(sock, subid, iter, flags, initstate) -> int
-```
-
-After reading the subscription socket the diff\_iterate() function can be used to iterate over the changes made in CDB data that matched the particular subscription point given by subid.
-
-The user defined function iter() will be called for each element that has been modified and matches the subscription.
-
-This function will return the last return value from the iter() callback.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* subid -- the subscription id
-* iter -- iterator function (see below)
-* flags -- iteration flags (or 0)
-* initstate -- opaque passed to iter function
-
-The user defined function iter() will be called for each element that has been modified and matches the subscription. It must have the following signature:
-
-```
-iter_fn(kp, op, oldv, newv, state) -> int
-```
-
-Where arguments are:
-
-* kp - a HKeypathRef or None
-* op - the operation
-* oldv - the old value or None
-* newv - the new value or None
-* state - the initstate object
-
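-A sketch of an iterator that logs every change and recurses into modified subtrees; it assumes the MOP\_\* operation constants and ITER\_RECURSE are exported by \_ncs, mirroring the C API:
-
-```
-import _ncs
-import _ncs.cdb as cdb
-
-def iter_fn(kp, op, oldv, newv, state):
-    # op is one of the _ncs.MOP_* constants (MOP_CREATED, MOP_DELETED,
-    # MOP_VALUE_SET, ...); kp is the keypath of the changed node
-    print('change at %s (op %d)' % (str(kp), op))
-    return _ncs.ITER_RECURSE
-
-# sub_sock and subid as obtained from read_subscription_socket()
-cdb.diff_iterate(sub_sock, subid, iter_fn, 0, None)
-```
-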
-### diff\_iterate\_resume
-
-```python
-diff_iterate_resume(sock, reply, iter, resumestate) -> int
-```
-
-The application must call this function whenever an iterator function has returned ITER\_SUSPEND to finish up the iteration. If the application does not wish to continue iteration it must at least call diff\_iterate\_resume(sock, ITER\_STOP, None, None) to clean up the state. The reply parameter is what the iterator function would have returned (i.e. normally ITER\_RECURSE or ITER\_CONTINUE) if it hadn't returned ITER\_SUSPEND.
-
-This function will return the last return value from the iter() callback.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* reply -- the reply value
-* iter -- iterator function (see diff\_iterate())
-* resumestate -- opaque passed to iter function
-
-### end\_session
-
-```python
-end_session(sock) -> None
-```
-
-We use connect() to establish a read socket to CDB. When the socket is closed, the read session is ended. We can reuse the same socket for another read session, but we must then end the session and create another session using start\_session().
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-
-### exists
-
-```python
-exists(sock, path) -> bool
-```
-
-Leafs in the data model may be optional, and presence containers and list entries may or may not exist. This function checks whether a node exists in CDB.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* path -- path to check for existence
-
-### get
-
-```python
-get(sock, path) -> _ncs.Value
-```
-
-This reads a value from the path and returns the result. The path must lead to a leaf element in the XML data tree.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* path -- path to leaf
-
-### get\_case
-
-```python
-get_case(sock, choice, path) -> None
-```
-
-When we use the YANG choice statement in the data model, this function can be used to find the currently selected case, avoiding useless get() etc requests for elements that belong to other cases.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* choice -- the choice (string)
-* path -- path to container or list entry where choice is defined (string)
-
-### get\_compaction\_info
-
-```python
-get_compaction_info(sock, dbfile) -> dict
-```
-
-Returns the compaction information on the given CDB file.
-
-The return value is a dict of the form:
-
-```
-{
- 'fsize_previous': fsize_previous,
- 'fsize_current': fsize_current,
- 'last_time': last_time,
- 'ntrans': ntrans
-}
-```
-
-In this dict all values are integers.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* dbfile -- A\_CDB, O\_CDB or S\_CDB.
-
-### get\_modifications
-
-```python
-get_modifications(sock, subid, flags, path) -> list
-```
-
-The get\_modifications() function can be called after reception of a subscription notification to retrieve all the changes that caused the subscription notification. The socket sock is the subscription socket. The subscription id must also be provided. Optionally a path can be used to further limit what is returned (only changes below the supplied path will be returned); if this isn't needed, path can be set to None.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* subid -- subscription id
-* flags -- the flags
-* path -- a path in string format or None
-
-### get\_modifications\_cli
-
-```python
-get_modifications_cli(sock, subid, flags) -> str
-```
-
-The get\_modifications\_cli() function can be called after reception of a subscription notification to retrieve all the changes that caused the subscription notification as a string in Cisco CLI format. The socket sock is the subscription socket. The subscription id must also be provided.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* subid -- subscription id
-* flags -- the flags
-
-### get\_modifications\_iter
-
-```python
-get_modifications_iter(sock, flags) -> list
-```
-
-The get\_modifications\_iter() function is basically a convenient short-hand of the get\_modifications() function intended to be used from within an iteration function started by diff\_iterate(). In this case no subscription id is needed, and the path is implicitly the current position in the iteration.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* flags -- the flags
-
-### get\_object
-
-```python
-get_object(sock, n, path) -> list
-```
-
-This function reads at most n values from the container or list entry specified by the path, and returns them as a list of Value's.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* n -- max number of values to read
-* path -- path to a list entry or a container (string)
-
-### get\_objects
-
-```python
-get_objects(sock, n, ix, nobj, path) -> list
-```
-
-Similar to get\_object(), but reads multiple entries of a list based on the "instance integer" otherwise given within square brackets in the path - here the path must specify the list without the instance integer. At most n values from each of nobj entries, starting at entry ix, are read and placed in the values array. The return value is a list of objects where each object is represented as a list of Values.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* n -- max number of values to read from each object
-* ix -- start index
-* nobj -- number of objects to read
-* path -- path to a list entry or a container (string)
-
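-A sketch that reads an entire list in one call, two values per entry (the list path and leaf order are hypothetical):
-
-```
-import _ncs.cdb as cdb
-
-# s is a data socket with an active read session
-n_inst = cdb.num_instances(s, '/devices/device')
-objs = cdb.get_objects(s, 2, 0, n_inst, '/devices/device')
-for obj in objs:
-    name, address = obj[0], obj[1]  # first two leafs of each entry
-```
-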
-### get\_phase
-
-```python
-get_phase(sock) -> dict
-```
-
-Returns the start-phase that CDB is currently in. The return value is a dict of the form:
-
-```
-{
- 'phase': phase,
- 'flags': flags,
- 'init': init,
- 'upgrade': upgrade
-}
-```
-
-In this dict 'phase' and 'flags' are integers, while 'init' and 'upgrade' are booleans.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-
-### get\_replay\_txids
-
-```python
-get_replay_txids(sock) -> List[Tuple]
-```
-
-When the subscriptionReplay functionality is enabled in confd.conf, this function returns the list of available transactions that CDB can replay. The current transaction id will be the first in the list, the second at txid\[1] and so on. In case there are no replay transactions available (the feature isn't enabled or there haven't been any transactions yet) only one (the current) transaction id is returned.
-
-The returned list contains tuples with the form (s1, s2, s3, primary) where s1, s2 and s3 are unsigned integers and primary is either a string or None.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-
-### get\_transaction\_handle
-
-```python
-get_transaction_handle(sock) -> int
-```
-
-Returns the transaction handle for the transaction that triggered the current subscription notification. This function uses a subscription socket, and can only be called when a subscription notification for configuration data has been received on that socket, before sync\_subscription\_socket() has been called. Additionally, it is not possible to call this function from the iter() function passed to diff\_iterate().
-
-Note:
-
-> A CDB client is not expected to access the ConfD transaction store directly - this function should only be used for logging or debugging purposes.
-
-Note:
-
-> When the ConfD High Availability functionality is used, the transaction information is not available on secondary nodes.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-
-### get\_txid
-
-```python
-get_txid(sock) -> tuple
-```
-
-Read the last transaction id from CDB. This function can be used if we are forced to reconnect to CDB. If the transaction id we read is identical to the last id we had prior to losing the CDB sockets, we don't have to reload our managed object data. See the User Guide for a full explanation.
-
-The returned tuple has the form (s1, s2, s3, primary) where s1, s2 and s3 are unsigned integers and primary is either a string or None.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-
-### get\_user\_session
-
-```python
-get_user_session(sock) -> int
-```
-
-Returns the user session id for the transaction that triggered the current subscription notification. This function uses a subscription socket, and can only be called when a subscription notification for configuration data has been received on that socket, before sync\_subscription\_socket() has been called. Additionally, it is not possible to call this function from the iter() function passed to diff\_iterate(). To retrieve full information about the user session, use \_maapi.get\_user\_session() (see [confd\_lib\_maapi(3)](../../resources/man/confd_lib_maapi.3.md)).
-
-Note:
-
-> When the ConfD High Availability functionality is used, the user session information is not available on secondary nodes.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-
-### get\_values
-
-```python
-get_values(sock, values, path) -> list
-```
-
-Read an arbitrary set of sub-elements of a container or list entry. The values list must be pre-populated with a number of TagValue instances.
-
-TagValues passed in the values list will be updated with the corresponding values read and a new values list will be returned.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* values -- a list of TagValue instances
-* path -- path to a list entry or a container (string)
-
-### getcwd
-
-```python
-getcwd(sock) -> str
-```
-
-Returns the current position as previously set by cd(), pushd(), or popd() as a string path. Note that what is returned is a pretty-printed version of the internal representation of the current position. It will be the shortest unique way to print the path but it might not exactly match the string given to cd().
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-
-### getcwd\_kpath
-
-```python
-getcwd_kpath(sock) -> _ncs.HKeypathRef
-```
-
-Returns the current position like getcwd(), but as a HKeypathRef instead of as a string.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-
-### index
-
-```python
-index(sock, path) -> int
-```
-
-Given a path to a list entry, index() returns its position (starting from 0).
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* path -- path to list entry
-
-### initiate\_journal\_compaction
-
-```python
-initiate_journal_compaction(sock) -> None
-```
-
-Normally CDB handles journal compaction of the config datastore automatically. If this has been turned off (in the configuration file) then the A.cdb file will grow indefinitely unless this API function is called periodically to initiate compaction. This function initiates a compaction and returns immediately (if the datastore is locked, the compaction will be delayed, but eventually compaction will take place). Calling this function when journal compaction is configured to be automatic has no effect.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-
-### initiate\_journal\_dbfile\_compaction
-
-```python
-initiate_journal_dbfile_compaction(sock, dbfile) -> None
-```
-
-Similar to initiate\_journal\_compaction() but initiates the compaction on the given CDB file instead of all CDB files.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* dbfile -- A\_CDB, O\_CDB or S\_CDB.
-
-### is\_default
-
-```python
-is_default(sock, path) -> bool
-```
-
-This function returns True for a leaf which has a default value defined in the data model when no value has been set, i.e. when the default value is in effect. It returns False for other existing leafs. There is normally no need to call this function, since CDB automatically provides the default value as needed when get() etc is called.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* path -- path to leaf
-
-### mandatory\_subscriber
-
-```python
-mandatory_subscriber(sock, name) -> None
-```
-
-Attaches a mandatory attribute and a mandatory name to the subscriber identified by sock. The name argument is distinct from the name argument in connect\_name().
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* name -- the name
-
-### next\_index
-
-```python
-next_index(sock, path) -> int
-```
-
-Given a path to a list entry, next\_index() returns the position (starting from 0) of the next entry (regardless of whether the path exists or not).
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* path -- path to list entry
-
-### num\_instances
-
-```python
-num_instances(sock, path) -> int
-```
-
-Returns the number of instances in a list.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* path -- path to list node
-
-### oper\_subscribe
-
-```python
-oper_subscribe(sock, nspace, path) -> int
-```
-
-Sets up a CDB subscription for changes in the operational database. Similar to the subscriptions for configuration data, we can be notified of changes to the operational data stored in CDB. Note that there are several differences from the subscriptions for configuration data.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* nspace -- the namespace hash
-* path -- path to node
-
-### popd
-
-```python
-popd(sock) -> None
-```
-
-Pops the top element from the directory stack and changes directory to the previous directory.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-
-### pushd
-
-```python
-pushd(sock, path) -> None
-```
-
-Similar to cd() but pushes the previous current directory on a stack.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* path -- path to cd to
-
-### read\_subscription\_socket
-
-```python
-read_subscription_socket(sock) -> list
-```
-
-This call will return a list of integer values containing subscription points earlier acquired through calls to subscribe().
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-
-### read\_subscription\_socket2
-
-```python
-read_subscription_socket2(sock) -> tuple
-```
-
-Another version of read\_subscription\_socket() which will return a 3-tuple in the form (type, flags, subpoints).
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-
-### replay\_subscriptions
-
-```python
-replay_subscriptions(sock, txid, sub_points) -> None
-```
-
-This function makes it possible to replay the subscription events for the last configuration change to some or all CDB subscribers. This call is useful in a number of recovery scenarios, where some CDB subscribers lost connection to ConfD before having received all the changes in a transaction. The replay functionality is only available if it has been enabled in confd.conf.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* txid -- a 4-tuple of the form (s1, s2, s3, primary)
-* sub\_points -- a list of subscription points
-
-### set\_case
-
-```python
-set_case(sock, choice, scase, path) -> None
-```
-
-When we use the YANG choice statement in the data model, this function can be used to select the current case.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* choice -- the choice (string)
-* scase -- the case (string)
-* path -- path to container or list entry where choice is defined (string)
-
-### set\_elem
-
-```python
-set_elem(sock, value, path) -> None
-```
-
-Set the value of a single leaf. The value may be either a Value instance or a string.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* value -- the value to set
-* path -- a string pointing to a single leaf
-
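-Since the value may be given as a plain string, writing a config false leaf can be sketched like this (address, port and the leaf path are assumptions):
-
-```
-import socket
-import _ncs.cdb as cdb
-
-s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
-cdb.connect(s, cdb.DATA_SOCKET, '127.0.0.1', 4569)
-cdb.start_session(s, cdb.OPERATIONAL)  # writes go to the operational datastore
-cdb.set_elem(s, '42', '/stats/counter')  # hypothetical config false leaf
-cdb.end_session(s)
-```
-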
-### set\_namespace
-
-```python
-set_namespace(sock, hashed_ns) -> None
-```
-
-If we want to access data in CDB where the toplevel element name is not unique, we need to set the namespace. We are reading data related to a specific .fxs file. confdc can be used to generate a .py file with a class for the namespace, using the --emit-python flag to confdc (see confdc(1)).
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* hashed\_ns -- the namespace hash
-
-### set\_object
-
-```python
-set_object(sock, values, path) -> None
-```
-
-Set all elements corresponding to the complete contents of a container or list entry, except for sub-lists.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* values -- a list of Value:s
-* path -- path to container or list entry (string)
-
-### set\_timeout
-
-```python
-set_timeout(sock, timeout_secs) -> None
-```
-
-A timeout for client actions can be specified via /confdConfig/cdb/clientTimeout in confd.conf, see the confd.conf(5) manual page. This function can be used to dynamically extend (or shorten) the timeout for the current action. Thus it is possible to configure a restrictive timeout in confd.conf, but still allow specific actions to have a longer execution time.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* timeout\_secs -- timeout in seconds
-
-### set\_values
-
-```python
-set_values(sock, values, path) -> None
-```
-
-Set arbitrary sub-elements of a container or list entry.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* values -- a list of TagValue:s
-* path -- path to container or list entry (string)
-
-### start\_session
-
-```python
-start_session(sock, db) -> None
-```
-
-Starts a new session on an already established socket to CDB. The db parameter should be one of RUNNING, PRE\_COMMIT\_RUNNING, STARTUP or OPERATIONAL.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* db -- the database
-
-### start\_session2
-
-```python
-start_session2(sock, db, flags) -> None
-```
-
-This function may be used instead of start\_session() if it is considered necessary to have more detailed control over some aspects of the CDB session - if in doubt, use start\_session() instead. The sock and db arguments are the same as for start\_session(), and flags is a bitmask of session flags (ORed together if more than one).
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* db -- the database
-* flags -- the flags
-
-### sub\_abort\_trans
-
-```python
-sub_abort_trans(sock, code, apptag_ns, apptag_tag, reason) -> None
-```
-
-This function is to be called instead of sync\_subscription\_socket() when the subscriber wishes to abort the current transaction. It is only valid to call after read\_subscription\_socket2() has returned with type set to CDB\_SUB\_PREPARE. The arguments after sock are the same as to X\_seterr\_extended() and give the caller a way of indicating the reason for the failure.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* code -- the error code
-* apptag\_ns -- the namespace hash
-* apptag\_tag -- the tag hash
-* reason -- reason string
-
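-A sketch of the prepare-phase decision in a two-phase subscriber (registered via subscribe2() with SUB\_RUNNING\_TWOPHASE); change\_is\_acceptable() is a hypothetical application check, and ERRCODE\_APPLICATION is assumed to be exported by \_ncs as in the C API:
-
-```
-import _ncs
-import _ncs.cdb as cdb
-
-(ntype, flags, subpoints) = cdb.read_subscription_socket2(sub_sock)
-if ntype == cdb.SUB_PREPARE and not change_is_acceptable():
-    cdb.sub_abort_trans(sub_sock, _ncs.ERRCODE_APPLICATION, 0, 0,
-                        'change rejected by subscriber')
-else:
-    cdb.sync_subscription_socket(sub_sock, cdb.DONE_PRIORITY)
-```
-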
-### sub\_abort\_trans\_info
-
-```python
-sub_abort_trans_info(sock, code, apptag_ns, apptag_tag, error_info, reason) -> None
-```
-
-Same as sub\_abort\_trans(), but also fills in the NETCONF `<error-info>` element.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* code -- the error code
-* apptag\_ns -- the namespace hash
-* apptag\_tag -- the tag hash
-* error\_info -- a list of TagValue instances
-* reason -- reason string
-
-### sub\_progress
-
-```python
-sub_progress(sock, msg) -> None
-```
-
-After receiving a subscription notification (using read\_subscription\_socket()) but before acknowledging it (or aborting, in the case of prepare subscriptions), it is possible to send progress reports back to ConfD using the sub\_progress() function.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* msg -- the message
-
-### subscribe
-
-```python
-subscribe(sock, prio, nspace, path) -> int
-```
-
-Sets up a CDB subscription so that we are notified when CDB configuration data changes. There can be multiple subscription points from different sources; that is, a single client daemon can have many subscriptions and there can be many client daemons. The return value is a subscription point used to identify this particular subscription.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* prio -- priority
-* nspace -- the namespace hash
-* path -- path to node
-
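-The canonical subscription loop can be sketched as follows (address, port, priority and path are assumptions; passing 0 as the namespace hash assumes the path is unambiguous without one):
-
-```
-import select
-import socket
-import _ncs.cdb as cdb
-
-sub_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
-cdb.connect(sub_sock, cdb.SUBSCRIPTION_SOCKET, '127.0.0.1', 4569)
-spoint = cdb.subscribe(sub_sock, 100, 0, '/devices/device')
-cdb.subscribe_done(sub_sock)  # no notifications are delivered before this
-
-while True:
-    (r, _, _) = select.select([sub_sock], [], [])
-    for sp in cdb.read_subscription_socket(sub_sock):
-        if sp == spoint:
-            pass  # react to the change, e.g. using diff_iterate()
-    cdb.sync_subscription_socket(sub_sock, cdb.DONE_PRIORITY)
-```
-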
-### subscribe2
-
-```python
-subscribe2(sock, type, flags, prio, nspace, path) -> int
-```
-
-This function supersedes the current subscribe() and oper\_subscribe() as well as makes it possible to use the new two phase subscription method. Operational and configuration subscriptions can be done on the same socket, but in that case the notifications may be arbitrarily interleaved, including operational notifications arriving between different configuration notifications for the same transaction. If this is a problem, use separate sockets for operational and configuration subscriptions.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* type -- subscription type
-* flags -- flags
-* prio -- priority
-* nspace -- the namespace hash
-* path -- path to node
-
-### subscribe\_done
-
-```python
-subscribe_done(sock) -> None
-```
-
-When a client is done registering all its subscriptions on a particular subscription socket it must call subscribe\_done(). No notifications will be delivered until then.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-
-### sync\_subscription\_socket
-
-```python
-sync_subscription_socket(sock, st) -> None
-```
-
-Once we have read the subscription notification through a call to read\_subscription\_socket() and optionally used the diff\_iterate() to iterate through the changes as well as acted on the changes to CDB, we must synchronize with CDB so that CDB can continue and deliver further subscription messages to subscribers with higher priority numbers.
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* st -- sync type (int)
-
-### trigger\_oper\_subscriptions
-
-```python
-trigger_oper_subscriptions(sock, sub_points, flags) -> None
-```
-
-This function works like trigger\_subscriptions(), but for CDB subscriptions to operational data. The caller will trigger all subscription points passed in the sub\_points list (or all operational data subscribers if the list is empty), and the call will not return until the last subscriber has called sync\_subscription\_socket().
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* sub\_points -- a list of subscription points
-* flags -- the flags
-
-### trigger\_subscriptions
-
-```python
-trigger_subscriptions(sock, sub_points) -> None
-```
-
-This function makes it possible to trigger CDB subscriptions for configuration data even though the configuration has not been modified. The caller will trigger all subscription points passed in the sub\_points list (or all subscribers if the list is empty) in priority order, and the call will not return until the last subscriber has called sync\_subscription\_socket().
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-* sub\_points -- a list of subscription points
-
-### wait\_start
-
-```python
-wait_start(sock) -> None
-```
-
-This call waits until CDB has completed start-phase 1 and is available; when it is, CONFD\_OK is returned. If CDB is already available (i.e. start-phase >= 1) the call returns immediately. This can be used by a CDB client that is not synchronously started and only wants to wait until it can read its configuration. The call can be used after connect().
-
-Keyword arguments:
-
-* sock -- a previously connected CDB socket
-
-## Predefined Values
-
-```python
-
-A_CDB = 1
-DATA_SOCKET = 2
-DONE_OPERATIONAL = 4
-DONE_PRIORITY = 1
-DONE_SOCKET = 2
-DONE_TRANSACTION = 3
-FLAG_INIT = 1
-FLAG_UPGRADE = 2
-GET_MODS_CLI_NO_BACKQUOTES = 8
-GET_MODS_INCLUDE_LISTS = 1
-GET_MODS_INCLUDE_MOVES = 16
-GET_MODS_REVERSE = 2
-GET_MODS_SUPPRESS_DEFAULTS = 4
-GET_MODS_WANT_ANCESTOR_DELETE = 32
-LOCK_PARTIAL = 8
-LOCK_REQUEST = 4
-LOCK_SESSION = 2
-LOCK_WAIT = 1
-OPERATIONAL = 3
-O_CDB = 2
-PRE_COMMIT_RUNNING = 4
-READ_COMMITTED = 16
-READ_SOCKET = 0
-RUNNING = 1
-STARTUP = 2
-SUBSCRIPTION_SOCKET = 1
-SUB_ABORT = 3
-SUB_COMMIT = 2
-SUB_FLAG_HA_IS_SECONDARY = 16
-SUB_FLAG_HA_IS_SLAVE = 16
-SUB_FLAG_HA_SYNC = 8
-SUB_FLAG_IS_LAST = 1
-SUB_FLAG_REVERT = 4
-SUB_FLAG_TRIGGER = 2
-SUB_OPER = 4
-SUB_OPERATIONAL = 3
-SUB_PREPARE = 1
-SUB_RUNNING = 1
-SUB_RUNNING_TWOPHASE = 2
-SUB_WANT_ABORT_ON_ABORT = 1
-S_CDB = 3
-```
diff --git a/developer-reference/pyapi/_ncs.dp.md b/developer-reference/pyapi/_ncs.dp.md
deleted file mode 100644
index 4428cb63..00000000
--- a/developer-reference/pyapi/_ncs.dp.md
+++ /dev/null
@@ -1,2103 +0,0 @@
-# \_ncs.dp Module
-
-Low level callback module for connecting data providers to NCS.
-
-This module is used to connect to the NCS Data Provider API. The purpose of this API is to provide callback hooks so that user-written data providers can provide data stored externally to NCS. NCS needs this information in order to drive its northbound agents.
-
-The module is also used to populate items in the data model which are not data or configuration items, such as statistics items from the device.
-
-The module consists of a number of API functions whose purpose is to install different callback functions at different points in the data model tree which is the representation of the device configuration. Read more about callpoints in tailf\_yang\_extensions(5). Read more about how to use the module in the User Guide chapters on Operational data and External data.
-
-This documentation should be read together with the [confd\_lib\_dp(3)](../../resources/man/confd_lib_dp.3.md) man page.
-
-## Functions
-
-### aaa\_reload
-
-```python
-aaa_reload(tctx) -> None
-```
-
-When the ConfD AAA tree is populated by an external data provider (see the AAA chapter in the User Guide), this function can be used by the data provider to notify ConfD when there is a change to the AAA data.
-
-Keyword arguments:
-
-* tctx -- a transaction context
-
-### access\_reply\_result
-
-```python
-access_reply_result(actx, result) -> None
-```
-
-The callbacks must call this function to report the result of the access check to ConfD, and should normally return CONFD\_OK. If any other value is returned, it will cause the access check to be rejected.
-
-Keyword arguments:
-
-* actx -- the authorization context
-* result -- the result (ACCESS\_RESULT\_xxx)
-
-### action\_delayed\_reply\_error
-
-```python
-action_delayed_reply_error(uinfo, errstr) -> None
-```
-
-If we use the CONFD\_DELAYED\_RESPONSE as a return value from the action callback, we must later asynchronously reply. This function is used to reply with error.
-
-Keyword arguments:
-
-* uinfo -- a user info context
-* errstr -- an error string
-
-### action\_delayed\_reply\_ok
-
-```python
-action_delayed_reply_ok(uinfo) -> None
-```
-
-If we use the CONFD\_DELAYED\_RESPONSE as a return value from the action callback, we must later asynchronously reply. This function is used to reply with success.
-
-Keyword arguments:
-
-* uinfo -- a user info context
-
-### action\_reply\_command
-
-```python
-action_reply_command(uinfo, values) -> None
-```
-
-If a CLI callback command should return data, it must invoke this function in response to the cb\_command() callback.
-
-Keyword arguments:
-
-* uinfo -- a user info context
-* values -- a list of strings or None
-
-### action\_reply\_completion
-
-```python
-action_reply_completion(uinfo, values) -> None
-```
-
-This function must normally be called in response to the cb\_completion() callback.
-
-Keyword arguments:
-
-* uinfo -- a user info context
-* values -- a list of 3-tuples or None (see below)
-
-The values argument must be None or a list of 3-tuples where each tuple is built up like:
-
-```
-(type::int, value::string, extra::string)
-```
-
-The third item of the tuple (extra) may be set to None.
-
-### action\_reply\_range\_enum
-
-```python
-action_reply_range_enum(uinfo, values, keysize) -> None
-```
-
-This function must be called in response to the cb\_completion() callback when it is invoked via a tailf:cli-custom-range-enumerator statement in the data model.
-
-Keyword arguments:
-
-* uinfo -- a user info context
-* values -- a list of keys as strings or None
-* keysize -- number of keys for the list in the data model
-
-The values argument is a flat list of keys. If the list in the data model specifies multiple keys this list is still flat. The keysize argument tells us how many keys to use for each list element. So the size of values should be a multiple of keysize.
-
-### action\_reply\_rewrite
-
-```python
-action_reply_rewrite(uinfo, values, unhides) -> None
-```
-
-This function can be called instead of action\_reply\_command() as a response to a show path rewrite callback invocation.
-
-Keyword arguments:
-
-* uinfo -- a user info context
-* values -- a list of strings or None
-* unhides -- a list of strings or None
-
-### action\_reply\_rewrite2
-
-```python
-action_reply_rewrite2(uinfo, values, unhides, selects) -> None
-```
-
-This function can be called instead of action\_reply\_command() as a response to a show path rewrite callback invocation.
-
-Keyword arguments:
-
-* uinfo -- a user info context
-* values -- a list of strings or None
-* unhides -- a list of strings or None
-* selects -- a list of strings or None
-
-### action\_reply\_values
-
-```python
-action_reply_values(uinfo, values) -> None
-```
-
-If the action definition specifies that the action should return data, it must invoke this function in response to the cb\_action() callback.
-
-Keyword arguments:
-
-* uinfo -- a user info context
-* values -- a list of \_lib.TagValue instances or None
-
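-A sketch of a cb\_action() returning a single result leaf; ns is a hypothetical confdc-generated namespace module, and the XmlTag/TagValue constructors are assumed to follow the C API layout:
-
-```
-import _ncs
-import _ncs.dp as dp
-
-class ActionCallbacks(object):
-    def cb_action(self, uinfo, name, kp, params):
-        result = [_ncs.TagValue(_ncs.XmlTag(ns.hash, ns.ns_result),
-                                _ncs.Value('ok'))]
-        dp.action_reply_values(uinfo, result)
-        return _ncs.CONFD_OK
-```
-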
-### action\_set\_fd
-
-```python
-action_set_fd(uinfo, sock) -> None
-```
-
-Associate a worker socket with the action. This function must be called in the action cb\_init() callback.
-
-Keyword arguments:
-
-* uinfo -- a user info context
-* sock -- a previously connected worker socket
-
-A typical implementation of an action cb\_init() callback looks like:
-
-```
-class ActionCallbacks(object):
- def __init__(self, workersock):
- self.workersock = workersock
-
- def cb_init(self, uinfo):
- dp.action_set_fd(uinfo, self.workersock)
-```
-
-### action\_set\_timeout
-
-```python
-action_set_timeout(uinfo, timeout_secs) -> None
-```
-
-Some action callbacks may require a significantly longer execution time than others, and this time may not even be possible to determine statically (e.g. a file download). In such cases the /confdConfig/capi/queryTimeout setting in confd.conf may be insufficient, and this function can be used to extend (or shorten) the timeout for the current callback invocation. The timeout is given in seconds from the point in time when the function is called.
-
-Keyword arguments:
-
-* uinfo -- a user info context
-* timeout\_secs -- timeout value
-
-### action\_seterr
-
-```python
-action_seterr(uinfo, errstr) -> None
-```
-
-If the action callback encounters fatal problems that cannot be expressed via the reply function, it may call this function with an appropriate message and return CONFD\_ERR instead of CONFD\_OK.
-
-Keyword arguments:
-
-* uinfo -- a user info context
-* errstr -- an error message string
-
-### action\_seterr\_extended
-
-```python
-action_seterr_extended(uinfo, code, apptag_ns, apptag_tag, errstr) -> None
-```
-
-This function can be used to provide more structured error information from an action callback.
-
-Keyword arguments:
-
-* uinfo -- a user info context
-* code -- an error code
-* apptag\_ns -- namespace - should be set to 0
-* apptag\_tag -- either 0 or the hash value for a data model node
-* errstr -- an error message string
-
-### action\_seterr\_extended\_info
-
-```python
-action_seterr_extended_info(uinfo, code, apptag_ns, apptag_tag,
- error_info, errstr) -> None
-```
-
-This function can be used to provide structured error information in the same way as action\_seterr\_extended(), and additionally provide contents for the NETCONF `<error-info>` element.
-
-Keyword arguments:
-
-* uinfo -- a user info context
-* code -- an error code
-* apptag\_ns -- namespace - should be set to 0
-* apptag\_tag -- either 0 or the hash value for a data model node
-* error\_info -- a list of \_lib.TagValue instances
-* errstr -- an error message string
-
-### auth\_seterr
-
-```python
-auth_seterr(actx, errstr) -> None
-```
-
-This function is used by the application to set an error string.
-
-This function can be used to provide a text message when the callback returns CONFD\_ERR. If used when rejecting a successful authentication, the message will be logged in ConfD's audit log (otherwise a generic "rejected by application callback" message is logged).
-
-Keyword arguments:
-
-* actx -- the auth context
-* errstr -- an error message string
-
-### authorization\_set\_timeout
-
-```python
-authorization_set_timeout(actx, timeout_secs) -> None
-```
-
-The authorization callbacks are invoked on the daemon control socket, and as such are expected to complete quickly. However in case they send requests to a remote server, and such a request needs to be retried, this function can be used to extend the timeout for the current callback invocation. The timeout is given in seconds from the point in time when the function is called.
-
-Keyword arguments:
-
-* actx -- the authorization context
-* timeout\_secs -- timeout value
-
-### connect
-
-```python
-connect(dx, sock, type, ip, port, path) -> None
-```
-
-Connects to the ConfD daemon. The socket instance provided via the 'sock' argument must be kept alive during the lifetime of the daemon context.
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* sock -- a Python socket instance
-* type -- the socket type (CONTROL\_SOCKET or WORKER\_SOCKET)
-* ip -- the ip address if socket is AF\_INET (optional)
-* port -- the port if socket is AF\_INET (optional)
-* path -- a filename if socket is AF\_UNIX (optional).
-
-### data\_get\_list\_filter
-
-```python
-data_get_list_filter(tctx) -> ListFilter
-```
-
-Get list filter from transaction context.
-
-Keyword arguments:
-
-* tctx -- a transaction context
-
-### data\_reply\_attrs
-
-```python
-data_reply_attrs(tctx, attrs) -> None
-```
-
-This function is used by the cb\_get\_attrs() callback to return the requested attribute values.
-
-Keyword arguments:
-
-* tctx -- a transaction context
-* attrs -- a list of \_lib.AttrValue instances
-
-### data\_reply\_found
-
-```python
-data_reply_found(tctx) -> None
-```
-
-This function is used by the cb\_exists\_optional() callback to indicate to ConfD that a node does exist.
-
-Keyword arguments:
-
-* tctx -- a transaction context
-
-### data\_reply\_next\_key
-
-```python
-data_reply_next_key(tctx, keys, next) -> None
-```
-
-This function is used by the cb\_get\_next() and cb\_find\_next() callbacks to return the next key.
-
-Keyword arguments:
-
-* tctx -- a transaction context
-* keys -- a list of \_lib.Value keys for a list item (see below)
-* next -- int value passed to the next invocation of cb\_get\_next() callback
-
-A list may have multiple key leafs specified in the data model. This is why the keys argument must be a list.
-
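-A sketch of a cb\_get\_next() for a single-entry list with one string key; passing keys=None is assumed to signal end-of-list, mirroring the C API convention:
-
-```
-import _ncs
-import _ncs.dp as dp
-
-class DataCallbacks(object):
-    def cb_get_next(self, tctx, kp, next):
-        if next == -1:  # -1 requests the first entry
-            dp.data_reply_next_key(tctx, [_ncs.Value('entry0')], 1)
-        else:
-            dp.data_reply_next_key(tctx, None, -1)  # end of list
-        return _ncs.CONFD_OK
-```
-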
-### data\_reply\_next\_object\_array
-
-```python
-data_reply_next_object_array(tctx, v, next) -> None
-```
-
-This function is used by the optional cb\_get\_next\_object() and cb\_find\_next\_object() callbacks to return an entire object including its keys. It combines the functions of data\_reply\_next\_key() and data\_reply\_value\_array().
-
-Keyword arguments:
-
-* tctx -- a transaction context
-* v -- a list of \_lib.Value instances
-* next -- int value passed to the next invocation of cb\_get\_next() callback
-
-### data\_reply\_next\_object\_arrays
-
-```python
-data_reply_next_object_arrays(tctx, objs, timeout_millisecs) -> None
-```
-
-This function is used by the optional cb\_get\_next\_object() and cb\_find\_next\_object() callbacks to return multiple objects including their keys, in \_lib.Value form.
-
-Keyword arguments:
-
-* tctx -- a transaction context
-* objs -- a list of tuples or None (see below)
-* timeout\_millisecs -- timeout value for ConfD's caching of returned data
-
-The format of argument objs is list(tuple(list(\_lib.Value), long)), or None to indicate end of list. Another way to indicate end of list is to include None as the first item in the 2-tuple last in the list.
-
-E.g.:
-
-```
-V = _lib.Value
-objs = [
- ( [ V(1), V(2) ], next1 ),
- ( [ V(3), V(4) ], next2 ),
- ( None, -1 )
- ]
-```
-
-### data\_reply\_next\_object\_tag\_value\_array
-
-```python
-data_reply_next_object_tag_value_array(tctx, tvs, next) -> None
-```
-
-This function is used by the optional cb\_get\_next\_object() and cb\_find\_next\_object() callbacks to return an entire object including its keys.
-
-Keyword arguments:
-
-* tctx -- a transaction context
-* tvs -- a list of \_lib.TagValue instances or None
-* next -- int value passed to the next invocation of cb\_get\_next\_object() callback
-
-### data\_reply\_next\_object\_tag\_value\_arrays
-
-```python
-data_reply_next_object_tag_value_arrays(tctx, objs, timeout_millisecs) -> None
-```
-
-This function is used by the optional cb\_get\_next\_object() and cb\_find\_next\_object() callbacks to return multiple objects including their keys.
-
-Keyword arguments:
-
-* tctx -- a transaction context
-* objs -- a list of tuples or None (see below)
-* timeout\_millisecs -- timeout value for ConfD's caching of returned data
-
-The format of argument objs is list(tuple(list(\_lib.TagValue), long)) or None to indicate end of list. Another way to indicate end of list is to include None as the first item in the 2-tuple last in the list.
-
-E.g.:
-
-```
-objs = [
- ( [ tagval1, tagval2 ], next1 ),
- ( [ tagval3, tagval4, tagval5 ], next2 ),
- ( None, -1 )
- ]
-```
-
-### data\_reply\_not\_found
-
-```python
-data_reply_not_found(tctx) -> None
-```
-
-This function is used by the cb\_get\_elem() and cb\_exists\_optional() callbacks to indicate to ConfD that a list entry or node does not exist.
-
-Keyword arguments:
-
-* tctx -- a transaction context
-
-### data\_reply\_tag\_value\_array
-
-```python
-data_reply_tag_value_array(tctx, tvs) -> None
-```
-
-This function is used to return an array of values, corresponding to a complete list entry, to ConfD. It can be used by the optional cb\_get\_object() callback.
-
-Keyword arguments:
-
-* tctx -- a transaction context
-* tvs -- a list of \_lib.TagValue instances or None
-
-### data\_reply\_value
-
-```python
-data_reply_value(tctx, v) -> None
-```
-
-This function is used to return a single data item to ConfD.
-
-Keyword arguments:
-
-* tctx -- a transaction context
-* v -- a \_lib.Value instance
-
-### data\_reply\_value\_array
-
-```python
-data_reply_value_array(tctx, vs) -> None
-```
-
-This function is used to return an array of values, corresponding to a complete list entry, to ConfD. It can be used by the optional cb\_get\_object() callback.
-
-Keyword arguments:
-
-* tctx -- a transaction context
-* vs -- a list of \_lib.Value instances
-
-### data\_set\_timeout
-
-```python
-data_set_timeout(tctx, timeout_secs) -> None
-```
-
-A data callback should normally complete quickly, since e.g. the execution of a 'show' command in the CLI may require many data callback invocations. In some rare cases it may still be necessary for a data callback to have a longer execution time, and then this function can be used to extend (or shorten) the timeout for the current callback invocation. The timeout is given in seconds from the point in time when the function is called.
-
-Keyword arguments:
-
-* tctx -- a transaction context
-* timeout\_secs -- timeout value
-
-### db\_set\_timeout
-
-```python
-db_set_timeout(dbx, timeout_secs) -> None
-```
-
-Some of the DB callbacks registered via register\_db\_cb(), e.g. cb\_copy\_running\_to\_startup(), may require a longer execution time than others. This function can be used to extend the timeout for the current callback invocation. The timeout is given in seconds from the point in time when the function is called.
-
-Keyword arguments:
-
-* dbx -- a db context of DbCtxRef
-* timeout\_secs -- timeout value
-
-### db\_seterr
-
-```python
-db_seterr(dbx, errstr) -> None
-```
-
-This function is used by the application to set an error string.
-
-Keyword arguments:
-
-* dbx -- a db context
-* errstr -- an error message string
-
-### db\_seterr\_extended
-
-```python
-db_seterr_extended(dbx, code, apptag_ns, apptag_tag, errstr) -> None
-```
-
-This function can be used to provide more structured error information from a db callback.
-
-Keyword arguments:
-
-* dbx -- a db context
-* code -- an error code
-* apptag\_ns -- namespace - should be set to 0
-* apptag\_tag -- either 0 or the hash value for a data model node
-* errstr -- an error message string
-
-### db\_seterr\_extended\_info
-
-```python
-db_seterr_extended_info(dbx, code, apptag_ns, apptag_tag,
- error_info, errstr) -> None
-```
-
-This function can be used to provide structured error information in the same way as db\_seterr\_extended(), and additionally provide contents for the NETCONF `<error-info>` element.
-
-Keyword arguments:
-
-* dbx -- a db context
-* code -- an error code
-* apptag\_ns -- namespace - should be set to 0
-* apptag\_tag -- either 0 or the hash value for a data model node
-* error\_info -- a list of \_lib.TagValue instances
-* errstr -- an error message string
-
-### delayed\_reply\_error
-
-```python
-delayed_reply_error(tctx, errstr) -> None
-```
-
-This function must be used to return an error when the actual callback returned CONFD\_DELAYED\_RESPONSE.
-
-Keyword arguments:
-
-* tctx -- a transaction context
-* errstr -- an error message string
-
-### delayed\_reply\_ok
-
-```python
-delayed_reply_ok(tctx) -> None
-```
-
-This function must be used to return the equivalent of CONFD\_OK when the actual callback returned CONFD\_DELAYED\_RESPONSE.
-
-Keyword arguments:
-
-* tctx -- a transaction context
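-A sketch of the delayed-response pattern for a write callback; start\_async\_write() and on\_async\_write\_done() are hypothetical application hooks:
-
-```
-import _ncs
-import _ncs.dp as dp
-
-class TransCallbacks(object):
-    def cb_set_elem(self, tctx, kp, newval):
-        start_async_write(tctx, kp, newval)  # hypothetical backend call
-        return _ncs.CONFD_DELAYED_RESPONSE
-
-# invoked later, when the asynchronous work finishes:
-def on_async_write_done(tctx, ok, msg):
-    if ok:
-        dp.delayed_reply_ok(tctx)
-    else:
-        dp.delayed_reply_error(tctx, msg)
-```
-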
-
-### delayed\_reply\_validation\_warn
-
-```python
-delayed_reply_validation_warn(tctx) -> None
-```
-
-This function must be used to return the equivalent of CONFD\_VALIDATION\_WARN when the cb\_validate() callback returned CONFD\_DELAYED\_RESPONSE.
-
-Keyword arguments:
-
-* tctx -- a transaction context
-
-### error\_seterr
-
-```python
-error_seterr(uinfo, errstr) -> None
-```
-
-This function must be called by format\_error() (above) to provide a replacement for the default error message. If format\_error() is called without calling error\_seterr() the default message will be used.
-
-Keyword arguments:
-
-* uinfo -- a user info context
-* errstr -- a string describing the error
-
-### fd\_ready
-
-```python
-fd_ready(dx, sock) -> None
-```
-
-The database application owns all data provider sockets to ConfD and is responsible for the polling of these sockets. When one of the ConfD sockets has I/O ready to read, the application must invoke fd\_ready() on the socket.
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* sock -- the socket
-
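-A typical arrangement is a poll loop like the sketch below (address and port are assumptions; callback registration and register\_done() are elided):
-
-```
-import select
-import socket
-import _ncs.dp as dp
-
-dx = dp.init_daemon('example-daemon')
-ctrl = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
-wrk = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
-dp.connect(dx, ctrl, dp.CONTROL_SOCKET, '127.0.0.1', 4569)
-dp.connect(dx, wrk, dp.WORKER_SOCKET, '127.0.0.1', 4569)
-# ... register callbacks here, then dp.register_done(dx) ...
-
-while True:
-    (r, _, _) = select.select([ctrl, wrk], [], [])
-    for s in r:
-        dp.fd_ready(dx, s)
-```
-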
-### init\_daemon
-
-```python
-init_daemon(name) -> DaemonCtxRef
-```
-
-Initializes and returns a new daemon context.
-
-Keyword arguments:
-
-* name -- a string used to uniquely identify the daemon
-
-### install\_crypto\_keys
-
-```python
-install_crypto_keys(dtx) -> None
-```
-
-It is possible to define AES keys inside confd.conf. These keys are used by ConfD to encrypt data which is entered into the system. The supported types are tailf:aes-cfb-128-encrypted-string and tailf:aes-256-cfb-128-encrypted-string. This function will copy those keys from ConfD (which reads confd.conf) into memory in the library.
-
-This function must be called before register\_done() is called.
-
-Keyword arguments:
-
-* dtx -- a daemon context which is connected through a call to connect()
-
-### nano\_service\_reply\_proplist
-
-```python
-nano_service_reply_proplist(tctx, proplist) -> None
-```
-
-This function must be called with the new property list, immediately prior to returning from the callback, if the stored property list should be updated. If a callback returns without calling nano\_service\_reply\_proplist(), the previous property list is retained. To completely delete the property list, call this function with the proplist argument set to an empty list or None.
-
-The proplist argument should be a list of 2-tuples built up like this: list( (name, value), (name, value), ... ). In a 2-tuple, both 'name' and 'value' must be strings.
-
-Keyword arguments:
-
-* tctx -- a transaction context
-* proplist -- a list of properties or None
-
-### notification\_flush
-
-```python
-notification_flush(nctx) -> None
-```
-
-Notifications are sent asynchronously, i.e. normally without blocking the caller of the send functions described above. This means that in some cases ConfD's sending of the notifications on the northbound interfaces may lag behind the send calls. This function can be used to make sure that the notifications have actually been sent out.
-
-Keyword arguments:
-
-* nctx -- notification context returned from register\_notification\_stream()
-
-### notification\_replay\_complete
-
-```python
-notification_replay_complete(nctx) -> None
-```
-
-The application calls this function to notify ConfD that the replay is complete.
-
-Keyword arguments:
-
-* nctx -- notification context returned from register\_notification\_stream()
-
-### notification\_replay\_failed
-
-```python
-notification_replay_failed(nctx) -> None
-```
-
-In case the application fails to complete the replay as requested (e.g. the log gets overwritten while the replay is in progress), the application should call this function instead of notification\_replay\_complete(). An error message describing the reason for the failure can be supplied by first calling notification\_seterr() or notification\_seterr\_extended().
-
-Keyword arguments:
-
-* nctx -- notification context returned from register\_notification\_stream()
-
-### notification\_reply\_log\_times
-
-```python
-notification_reply_log_times(nctx, creation, aged) -> None
-```
-
-Reply function for use in the cb\_get\_log\_times() callback invocation. If no notifications have been aged out of the log, give None for the aged argument.
-
-Keyword arguments:
-
-* nctx -- notification context returned from register\_notification\_stream()
-* creation -- a \_lib.DateTime instance
-* aged -- a \_lib.DateTime instance or None
-
-### notification\_send
-
-```python
-notification_send(nctx, time, values) -> None
-```
-
-This function is called by the application to send a notification defined at the top level of a YANG module, whether "live" or replay.
-
-Keyword arguments:
-
-* nctx -- notification context returned from register\_notification\_stream()
-* time -- a \_lib.DateTime instance
-* values -- a list of \_lib.TagValue instances or None
-
-### notification\_send\_path
-
-```python
-notification_send_path(nctx, time, values, path) -> None
-```
-
-This function is called by the application to send a notification defined as a child of a container or list in a YANG 1.1 module, whether "live" or replay.
-
-Keyword arguments:
-
-* nctx -- notification context returned from register\_notification\_stream()
-* time -- a \_lib.DateTime instance
-* values -- a list of \_lib.TagValue instances or None
-* path -- path to the parent of the notification in the data tree
-
-### notification\_send\_snmp
-
-```python
-notification_send_snmp(nctx, notification, varbinds) -> None
-```
-
-Sends the SNMP notification specified by 'notification', without requesting inform-request delivery information. This is equivalent to calling notification\_send\_snmp\_inform() with None as the cb\_id argument. I.e. if the common arguments are the same, the two functions will send the exact same set of traps and inform-requests.
-
-Keyword arguments:
-
-* nctx -- notification context returned from register\_snmp\_notification()
-* notification -- the notification string
-* varbinds -- a list of \_lib.SnmpVarbind instances or None
-
-### notification\_send\_snmp\_inform
-
-```python
-notification_send_snmp_inform(nctx, notification, varbinds, cb_id, ref) ->None
-```
-
-Sends the SNMP notification specified by notification. If cb\_id is not None the callbacks registered for cb\_id will be invoked with the ref argument.
-
-Keyword arguments:
-
-* nctx -- notification context returned from register\_snmp\_notification()
-* notification -- the notification string
-* varbinds -- a list of \_lib.SnmpVarbind instances or None
-* cb\_id -- callback id
-* ref -- argument sent to callbacks
-
-### notification\_set\_fd
-
-```python
-notification_set_fd(nctx, sock) -> None
-```
-
-This function may optionally be called by the cb\_replay() callback to request that the worker socket given by 'sock' should be used for the replay. Otherwise the socket specified in register\_notification\_stream() will be used.
-
-Keyword arguments:
-
-* nctx -- notification context returned from register\_notification\_stream()
-* sock -- a previously connected worker socket
-
-### notification\_set\_snmp\_notify\_name
-
-```python
-notification_set_snmp_notify_name(nctx, notify_name) -> None
-```
-
-This function can be used to change the snmpNotifyName (notify\_name) for the nctx context.
-
-Keyword arguments:
-
-* nctx -- notification context returned from register\_snmp\_notification()
-* notify\_name -- the snmpNotifyName
-
-### notification\_set\_snmp\_src\_addr
-
-```python
-notification_set_snmp_src_addr(nctx, family, src_addr) -> None
-```
-
-By default, the source address for the SNMP notifications that are sent by the above functions is chosen by the IP stack of the OS. This function may be used to select a specific source address, given by src\_addr, for the SNMP notifications subsequently sent using the nctx context. The default can be restored by calling the function with family set to AF\_UNSPEC.
-
-Keyword arguments:
-
-* nctx -- notification context returned from register\_snmp\_notification()
-* family -- AF\_INET, AF\_INET6 or AF\_UNSPEC
-* src\_addr -- the source address in string format
-
-### notification\_seterr
-
-```python
-notification_seterr(nctx, errstr) -> None
-```
-
-In some cases the callbacks may be unable to carry out the requested actions, e.g. the capacity for simultaneous replays might be exceeded, and they can then return CONFD\_ERR. This function allows the callback to associate an error message with the failure. It can also be used to supply an error message before calling notification\_replay\_failed().
-
-Keyword arguments:
-
-* nctx -- notification context returned from register\_notification\_stream()
-* errstr -- an error message string
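-
-E.g., a sketch of a replay callback that gives up with an error message (the capacity check is hypothetical):
-
-```
-class NotificationCallbacks(object):
-    def cb_replay(self, nctx, start, stop):
-        if self.active_replays >= self.max_replays:  # hypothetical check
-            dp.notification_seterr(nctx, 'too many simultaneous replays')
-            return _ncs.CONFD_ERR
-        # ... otherwise start the replay ...
-```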
-
-### notification\_seterr\_extended
-
-```python
-notification_seterr_extended(nctx, code, apptag_ns, apptag_tag, errstr) -> None
-```
-
-This function can be used to provide more structured error information from a notification callback.
-
-Keyword arguments:
-
-* nctx -- notification context returned from register\_notification\_stream()
-* code -- an error code
-* apptag\_ns -- namespace - should be set to 0
-* apptag\_tag -- either 0 or the hash value for a data model node
-* errstr -- an error message string
-
-### notification\_seterr\_extended\_info
-
-```python
-notification_seterr_extended_info(nctx, code, apptag_ns, apptag_tag,
- error_info, errstr) -> None
-```
-
-This function can be used to provide structured error information in the same way as notification\_seterr\_extended(), and additionally provide contents for the NETCONF `<error-info>` element.
-
-Keyword arguments:
-
-* nctx -- notification context returned from register\_notification\_stream()
-* code -- an error code
-* apptag\_ns -- namespace - should be set to 0
-* apptag\_tag -- either 0 or the hash value for a data model node
-* error\_info -- a list of \_lib.TagValue instances
-* errstr -- an error message string
-
-### register\_action\_cbs
-
-```python
-register_action_cbs(dx, actionpoint, acb) -> None
-```
-
-This function registers up to five callback functions, two of which will be called in sequence when an action is invoked.
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* actionpoint -- the name of the action point
-* acb -- the callback instance (see below)
-
-The acb argument should be an instance of a class with callback methods. E.g.:
-
-```
-class ActionCallbacks(object):
- def cb_init(self, uinfo):
- pass
-
- def cb_abort(self, uinfo):
- pass
-
- def cb_action(self, uinfo, name, kp, params):
- pass
-
- def cb_command(self, uinfo, path, argv):
- pass
-
- def cb_completion(self, uinfo, cli_style, token, completion_char,
- kp, cmdpath, cmdparam_id, simpleType, extra):
- pass
-
-acb = ActionCallbacks()
-dp.register_action_cbs(dx, 'actionpoint-1', acb)
-```
-
-Notes about some of the callbacks:
-
-cb\_action() The params argument is a list of \_lib.TagValue instances.
-
-cb\_command() The argv argument is a list of strings.
-
-### register\_auth\_cb
-
-```python
-register_auth_cb(dx, acb) -> None
-```
-
-Registers the authentication callback.
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* acb -- the callback instance (see below)
-
-E.g.:
-
-```
-class AuthCallbacks(object):
- def cb_auth(self, actx):
- pass
-
-acb = AuthCallbacks()
-dp.register_auth_cb(dx, acb)
-```
-
-### register\_authorization\_cb
-
-```python
-register_authorization_cb(dx, acb, cmd_filter, data_filter) -> None
-```
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* acb -- the callback instance (see below)
-* cmd\_filter -- set to 0 for no filtering
-* data\_filter -- set to 0 for no filtering
-
-E.g.:
-
-```
-class AuthorizationCallbacks(object):
- def cb_chk_cmd_access(self, actx, cmdtokens, cmdop):
- pass
-
- def cb_chk_data_access(self, actx, hashed_ns, hkp, dataop, how):
- pass
-
-acb = AuthorizationCallbacks()
-dp.register_authorization_cb(dx, acb, 0, 0)
-```
-
-### register\_data\_cb
-
-```python
-register_data_cb(dx, callpoint, data, flags) -> None
-```
-
-Registers data manipulation callback functions.
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* callpoint -- name of a tailf:callpoint in the data model
-* data -- the callback instance (see below)
-* flags -- data callbacks flags, dp.DATA\_\* (optional)
-
-The data argument should be an instance of a class with callback methods. E.g.:
-
-```
-class DataCallbacks(object):
- def cb_exists_optional(self, tctx, kp):
- pass
-
- def cb_get_elem(self, tctx, kp):
- pass
-
- def cb_get_next(self, tctx, kp, next):
- pass
-
- def cb_set_elem(self, tctx, kp, newval):
- pass
-
- def cb_create(self, tctx, kp):
- pass
-
- def cb_remove(self, tctx, kp):
- pass
-
- def cb_find_next(self, tctx, kp, type, keys):
- pass
-
- def cb_num_instances(self, tctx, kp):
- pass
-
- def cb_get_object(self, tctx, kp):
- pass
-
- def cb_get_next_object(self, tctx, kp, next):
- pass
-
- def cb_find_next_object(self, tctx, kp, type, keys):
- pass
-
- def cb_get_case(self, tctx, kp, choice):
- pass
-
- def cb_set_case(self, tctx, kp, choice, caseval):
- pass
-
- def cb_get_attrs(self, tctx, kp, attrs):
- pass
-
- def cb_set_attr(self, tctx, kp, attr, v):
- pass
-
- def cb_move_after(self, tctx, kp, prevkeys):
- pass
-
- def cb_write_all(self, tctx, kp):
- pass
-
-dcb = DataCallbacks()
-dp.register_data_cb(dx, 'example-callpoint-1', dcb)
-```
-
-### register\_db\_cb
-
-```python
-register_db_cb(dx, dbcbs) -> None
-```
-
-This function is used to set callback functions which span over several ConfD transactions.
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* dbcbs -- the callback instance (see below)
-
-The dbcbs argument should be an instance of a class with callback methods. E.g.:
-
-```
-class DbCallbacks(object):
- def cb_candidate_commit(self, dbx, timeout):
- pass
-
- def cb_candidate_confirming_commit(self, dbx):
- pass
-
- def cb_candidate_reset(self, dbx):
- pass
-
- def cb_candidate_chk_not_modified(self, dbx):
- pass
-
- def cb_candidate_rollback_running(self, dbx):
- pass
-
- def cb_candidate_validate(self, dbx):
- pass
-
- def cb_add_checkpoint_running(self, dbx):
- pass
-
- def cb_del_checkpoint_running(self, dbx):
- pass
-
- def cb_activate_checkpoint_running(self, dbx):
- pass
-
- def cb_copy_running_to_startup(self, dbx):
- pass
-
- def cb_running_chk_not_modified(self, dbx):
- pass
-
- def cb_lock(self, dbx, dbname):
- pass
-
- def cb_unlock(self, dbx, dbname):
- pass
-
- def cb_lock_partial(self, dbx, dbname, lockid, paths):
- pass
-
- def cb_unlock_partial(self, dbx, dbname, lockid):
- pass
-
- def cb_delete_config(self, dbx, dbname):
- pass
-
-dbcbs = DbCallbacks()
-dp.register_db_cb(dx, dbcbs)
-```
-
-### register\_done
-
-```python
-register_done(dx) -> None
-```
-
-When we have registered all the callbacks for a daemon (including the other types described below if we have them), we must call this function to synchronize with ConfD. No callbacks will be invoked until it has been called, and after the call, no further registrations are allowed.
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
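-
-A typical registration sequence therefore ends with register\_done(), e.g. (a sketch reusing the daemon context and callback instances from the examples in this document):
-
-```
-dp.register_trans_cb(dx, tcb)
-dp.register_data_cb(dx, 'example-callpoint-1', dcb)
-dp.register_done(dx)  # no further registrations allowed after this
-```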
-
-### register\_error\_cb
-
-```python
-register_error_cb(dx, errortypes, ecbs) -> None
-```
-
-This function can be used to register error callbacks that are invoked for internally generated errors.
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* errortypes -- logical OR of the error types that the ecbs should handle
-* ecbs -- the callback instance (see below)
-
-E.g.:
-
-```
-class ErrorCallbacks(object):
- def cb_format_error(self, uinfo, errinfo_dict, default_msg):
- dp.error_seterr(uinfo, default_msg)
-ecbs = ErrorCallbacks()
-dp.register_error_cb(ctx,
- dp.ERRTYPE_BAD_VALUE |
- dp.ERRTYPE_MISC, ecbs)
-dp.register_done(ctx)
-```
-
-### register\_nano\_service\_cb
-
-```python
-register_nano_service_cb(dx, servicepoint, componenttype, state, nscb) -> None
-```
-
-This function registers the nano service callbacks.
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* servicepoint -- name of the service point (string)
-* componenttype -- name of the plan component for the nano service (string)
-* state -- name of component state for the nano service (string)
-* nscb -- the nano callback instance (see below)
-
-E.g.:
-
-```
-class NanoServiceCallbacks(object):
- def cb_nano_create(self, tctx, root, service, plan,
- component, state, proplist, compproplist):
- pass
-
- def cb_nano_delete(self, tctx, root, service, plan,
- component, state, proplist, compproplist):
- pass
-
-nscb = NanoServiceCallbacks()
-dp.register_nano_service_cb(dx, 'service-point-1', 'comp', 'state', nscb)
-```
-
-### register\_notification\_snmp\_inform\_cb
-
-```python
-register_notification_snmp_inform_cb(dx, cb_id, cbs) -> None
-```
-
-If we want to receive information about the delivery of SNMP inform-requests, we must register two callbacks for this.
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* cb\_id -- the callback identifier
-* cbs -- the callback instance (see below)
-
-E.g.:
-
-```
-class NotifySnmpCallbacks(object):
- def cb_targets(self, nctx, ref, targets):
- pass
-
- def cb_result(self, nctx, ref, target, got_response):
- pass
-
-cbs = NotifySnmpCallbacks()
-dp.register_notification_snmp_inform_cb(dx, 'callback-id-1', cbs)
-```
-
-### register\_notification\_stream
-
-```python
-register_notification_stream(dx, ncbs, sock, streamname) -> NotificationCtxRef
-```
-
-This function registers the notification stream and optionally two callback functions used for the replay functionality.
-
-The returned notification context must be used by the application for the sending of live notifications via notification\_send() or notification\_send\_path().
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* ncbs -- the callback instance (see below)
-* sock -- a previously connected worker socket
-* streamname -- the name of the notification stream
-
-E.g.:
-
-```
-class NotificationCallbacks(object):
- def cb_get_log_times(self, nctx):
- pass
-
- def cb_replay(self, nctx, start, stop):
- pass
-
-ncbs = NotificationCallbacks()
-livectx = dp.register_notification_stream(dx, ncbs, workersock, 'streamname')
-```
-
-### register\_notification\_sub\_snmp\_cb
-
-```python
-register_notification_sub_snmp_cb(dx, sub_id, cbs) -> None
-```
-
-Registers a callback function to be called when an SNMP notification is received by the SNMP gateway.
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* sub\_id -- the subscription id for the notifications
-* cbs -- the callback instance (see below)
-
-E.g.:
-
-```
-class NotifySubSnmpCallbacks(object):
- def cb_recv(self, nctx, notification, varbinds, src_addr, port):
- pass
-
-cbs = NotifySubSnmpCallbacks()
-dp.register_notification_sub_snmp_cb(dx, 'sub-id-1', cbs)
-```
-
-### register\_range\_action\_cbs
-
-```python
-register_range_action_cbs(dx, actionpoint, acb, lower, upper, path) -> None
-```
-
-A variant of register\_action\_cbs() which registers action callbacks for a range of key values. The lower, upper, and path arguments are the same as for register\_range\_data\_cb().
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* actionpoint -- the name of the action point
-* acb -- the callback instance (see register\_action\_cbs())
-* lower -- a list of Value instances or None
-* upper -- a list of Value instances or None
-* path -- path for the list (string)
-
-### register\_range\_data\_cb
-
-```python
-register_range_data_cb(dx, callpoint, data, lower, upper, path,
- flags) -> None
-```
-
-This is a variant of register\_data\_cb() which registers a set of callbacks for a range of list entries.
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* callpoint -- name of a tailf:callpoint in the data model
-* data -- the callback instance (see register\_data\_cb())
-* lower -- a list of Value instances or None
-* upper -- a list of Value instances or None
-* path -- path for the list (string)
-* flags -- data callbacks flags, dp.DATA\_\* (optional)
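-
-E.g., a sketch that registers one set of callbacks for list entries with integer keys 1..100 (the list path and key type are hypothetical):
-
-```
-lower = [_ncs.Value(1, _ncs.C_INT32)]
-upper = [_ncs.Value(100, _ncs.C_INT32)]
-dp.register_range_data_cb(dx, 'example-callpoint-1', dcb,
-                          lower, upper, '/servers/server')
-```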
-
-### register\_range\_valpoint\_cb
-
-```python
-register_range_valpoint_cb(dx, valpoint, vcb, lower, upper, path) -> None
-```
-
-A variant of register\_valpoint\_cb() which registers a validation function for a range of key values. The lower, upper and path arguments are the same as for register\_range\_data\_cb().
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* valpoint -- name of a validation point
-* vcb -- the callback instance (see register\_valpoint\_cb())
-* lower -- a list of Value instances or None
-* upper -- a list of Value instances or None
-* path -- path for the list (string)
-
-### register\_service\_cb
-
-```python
-register_service_cb(dx, servicepoint, scb) -> None
-```
-
-This function registers the service callbacks.
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* servicepoint -- name of the service point (string)
-* scb -- the callback instance (see below)
-
-E.g.:
-
-```
-class ServiceCallbacks(object):
- def cb_create(self, tctx, kp, proplist, fastmap_thandle):
- pass
-
- def cb_pre_modification(self, tctx, op, kp, proplist):
- pass
-
- def cb_post_modification(self, tctx, op, kp, proplist):
- pass
-
-scb = ServiceCallbacks()
-dp.register_service_cb(dx, 'service-point-1', scb)
-```
-
-### register\_snmp\_notification
-
-```python
-register_snmp_notification(dx, sock, notify_name, ctx_name) -> NotificationCtxRef
-```
-
-SNMP notifications can also be sent via the notification framework; however, most aspects of the stream concept do not apply for SNMP. This function is used to register a worker socket, the snmpNotifyName (notify\_name), and SNMP context (ctx\_name) to be used for the notifications.
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* sock -- a previously connected worker socket
-* notify\_name -- the snmpNotifyName
-* ctx\_name -- the SNMP context
-
-### register\_trans\_cb
-
-```python
-register_trans_cb(dx, trans) -> None
-```
-
-Registers transaction callback functions.
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* trans -- the callback instance (see below)
-
-The trans argument should be an instance of a class with callback methods. E.g.:
-
-```
-class TransCallbacks(object):
- def cb_init(self, tctx):
- pass
-
- def cb_trans_lock(self, tctx):
- pass
-
- def cb_trans_unlock(self, tctx):
- pass
-
- def cb_write_start(self, tctx):
- pass
-
- def cb_prepare(self, tctx):
- pass
-
- def cb_abort(self, tctx):
- pass
-
- def cb_commit(self, tctx):
- pass
-
- def cb_finish(self, tctx):
- pass
-
- def cb_interrupt(self, tctx):
- pass
-
-tcb = TransCallbacks()
-dp.register_trans_cb(dx, tcb)
-```
-
-### register\_trans\_validate\_cb
-
-```python
-register_trans_validate_cb(dx, vcbs) -> None
-```
-
-This function installs two callback functions for the daemon context: one that is called when the validation phase of a transaction starts, and one that is called when it stops.
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* vcbs -- the callback instance (see below)
-
-The vcbs argument should be an instance of a class with callback methods. E.g.:
-
-```
-class TransValidateCallbacks(object):
- def cb_init(self, tctx):
- pass
-
- def cb_stop(self, tctx):
- pass
-
-vcbs = TransValidateCallbacks()
-dp.register_trans_validate_cb(dx, vcbs)
-```
-
-### register\_usess\_cb
-
-```python
-register_usess_cb(dx, ucb) -> None
-```
-
-This function can be used to register information callbacks that are invoked for user session start and stop.
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* ucb -- the callback instance (see below)
-
-E.g.:
-
-```
-class UserSessionCallbacks(object):
- def cb_start(self, dx, uinfo):
- pass
-
- def cb_stop(self, dx, uinfo):
- pass
-
-ucb = UserSessionCallbacks()
-dp.register_usess_cb(dx, ucb)
-```
-
-### register\_valpoint\_cb
-
-```python
-register_valpoint_cb(dx, valpoint, vcb) -> None
-```
-
-We must also install an actual validation function for each validation point, i.e. for each tailf:validate statement in the YANG data model.
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* valpoint -- the name of the validation point
-* vcb -- the callback instance (see below)
-
-The vcb argument should be an instance of a class with a callback method. E.g.:
-
-```
-class ValpointCallback(object):
- def cb_validate(self, tctx, kp, newval):
- pass
-
-vcb = ValpointCallback()
-dp.register_valpoint_cb(dx, 'valpoint-1', vcb)
-```
-
-### release\_daemon
-
-```python
-release_daemon(dx) -> None
-```
-
-Releases all memory that has been allocated by init\_daemon() and other functions for the daemon context. The control socket as well as all the worker sockets must be closed by the application (before or after release\_daemon() has been called).
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
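-
-A typical teardown sketch, assuming the control and worker sockets created by the application:
-
-```
-ctlsock.close()
-workersock.close()
-dp.release_daemon(dx)
-```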
-
-### service\_reply\_proplist
-
-```python
-service_reply_proplist(tctx, proplist) -> None
-```
-
-This function must be called with the new property list, immediately prior to returning from the callback, if the stored property list should be updated. If a callback returns without calling service\_reply\_proplist(), the previous property list is retained. To completely delete the property list, call this function with the proplist argument set to an empty list or None.
-
-The proplist argument should be a list of 2-tuples built up like this: list( (name, value), (name, value), ... ). In each 2-tuple both 'name' and 'value' must be strings.
-
-Keyword arguments:
-
-* tctx -- a transaction context
-* proplist -- a list of properties or None
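-
-E.g., a sketch of a service cb\_create() that stores two properties (the property names and values are hypothetical):
-
-```
-class ServiceCallbacks(object):
-    def cb_create(self, tctx, kp, proplist, fastmap_thandle):
-        # ... create the service configuration ...
-        newprops = [('vlan-id', '42'), ('owner', 'admin')]
-        dp.service_reply_proplist(tctx, newprops)
-```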
-
-### set\_daemon\_flags
-
-```python
-set_daemon_flags(dx, flags) -> None
-```
-
-Modifies the API behaviour according to the flags ORed into the flags argument.
-
-Keyword arguments:
-
-* dx -- a daemon context acquired through a call to init\_daemon()
-* flags -- the flags to set
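-
-E.g., using the dp.DAEMON\_FLAG\_\* values listed under Predefined Values below:
-
-```
-dp.set_daemon_flags(dx, dp.DAEMON_FLAG_STRINGSONLY |
-                        dp.DAEMON_FLAG_NO_DEFAULTS)
-```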
-
-### trans\_set\_fd
-
-```python
-trans_set_fd(tctx, sock) -> None
-```
-
-Associate a worker socket with the transaction or validation phase. This function must be called in the transaction and validation cb\_init() callbacks.
-
-Keyword arguments:
-
-* tctx -- a transaction context
-* sock -- a previously connected worker socket
-
-A minimal implementation of a transaction cb\_init() callback looks like:
-
-```
-class TransCb(object):
- def __init__(self, workersock):
- self.workersock = workersock
-
- def cb_init(self, tctx):
- dp.trans_set_fd(tctx, self.workersock)
-```
-
-### trans\_seterr
-
-```python
-trans_seterr(tctx, errstr) -> None
-```
-
-This function is used by the application to set an error string.
-
-Keyword arguments:
-
-* tctx -- a transaction context
-* errstr -- an error message string
-
-### trans\_seterr\_extended
-
-```python
-trans_seterr_extended(tctx, code, apptag_ns, apptag_tag, errstr) -> None
-```
-
-This function can be used to provide more structured error information from a transaction or data callback.
-
-Keyword arguments:
-
-* tctx -- a transaction context
-* code -- an error code
-* apptag\_ns -- namespace - should be set to 0
-* apptag\_tag -- either 0 or the hash value for a data model node
-* errstr -- an error message string
-
-### trans\_seterr\_extended\_info
-
-```python
-trans_seterr_extended_info(tctx, code, apptag_ns, apptag_tag,
- error_info, errstr) -> None
-```
-
-This function can be used to provide structured error information in the same way as trans\_seterr\_extended(), and additionally provide contents for the NETCONF `<error-info>` element.
-
-Keyword arguments:
-
-* tctx -- a transaction context
-* code -- an error code
-* apptag\_ns -- namespace - should be set to 0
-* apptag\_tag -- either 0 or the hash value for a data model node
-* error\_info -- a list of \_lib.TagValue instances
-* errstr -- an error message string
-
-## Classes
-
-### _class_ **AuthCtxRef**
-
-This type represents the c-type struct confd\_auth\_ctx.
-
-Available attributes:
-
-* uinfo -- the user info (UserInfo)
-* method -- the method (string)
-* success -- success or failure (bool)
-* groups -- authorization groups if success is True (list of strings)
-* logno -- log number if success is False (int)
-* reason -- error reason if success is False (string)
-
-AuthCtxRef cannot be directly instantiated from Python.
-
-Members:
-
-_None_
-
-### _class_ **AuthorizationCtxRef**
-
-This type represents the c-type struct confd\_authorization\_ctx.
-
-Available attributes:
-
-* uinfo -- the user info (UserInfo) or None
-* groups -- authorization groups (list of strings) or None
-
-AuthorizationCtxRef cannot be directly instantiated from Python.
-
-Members:
-
-_None_
-
-### _class_ **DaemonCtxRef**
-
-struct confd\_daemon\_ctx references object
-
-Members:
-
-_None_
-
-### _class_ **DbCtxRef**
-
-This type represents the c-type struct confd\_db\_ctx.
-
-DbCtxRef cannot be directly instantiated from Python.
-
-Members:
-
-did(...)
-
-Method:
-
-```python
-did() -> int
-```
-
-dx(...)
-
-Method:
-
-```python
-dx() -> DaemonCtxRef
-```
-
-lastop(...)
-
-Method:
-
-```python
-lastop() -> int
-```
-
-qref(...)
-
-Method:
-
-```python
-qref() -> int
-```
-
-uinfo(...)
-
-Method:
-
-```python
-uinfo() -> _ncs.UserInfo
-```
-
-### _class_ **ListFilter**
-
-This type represents the c-type struct confd\_list\_filter.
-
-Available attributes:
-
-* type -- filter type, LF\_\*
-* expr1 -- OR, AND, NOT expression
-* expr2 -- OR, AND expression
-* op -- operation, CMP\_\* and EXEC\_\*
-* node -- filter tagpath
-* val -- filter value
-
-ListFilter cannot be directly instantiated from Python.
-
-Members:
-
-_None_
-
-### _class_ **NotificationCtxRef**
-
-This type represents the c-type struct confd\_notification\_ctx.
-
-Available attributes:
-
-* name -- stream name or snmp notify name (string or None)
-* ctx\_name -- for snmp only (string or None)
-* fd -- worker socket (int)
-* dx -- the daemon context (DaemonCtxRef)
-
-NotificationCtxRef cannot be directly instantiated from Python.
-
-Members:
-
-_None_
-
-### _class_ **TrItemRef**
-
-This type represents the c-type confd\_tr\_item.
-
-Available attributes:
-
-* callpoint -- the callpoint (string)
-* op -- operation, one of C\_SET\_ELEM, C\_CREATE, C\_REMOVE, C\_SET\_CASE, C\_SET\_ATTR or C\_MOVE\_AFTER (int)
-* hkp -- the keypath (HKeypathRef)
-* val -- the value (Value or None)
-* choice -- the choice, only for C\_SET\_CASE (Value or None)
-* attr -- attribute, only for C\_SET\_ATTR (int or None)
-* next -- the next TrItemRef object in the linked list or None if no more items are found
-
-TrItemRef cannot be directly instantiated from Python.
-
-Members:
-
-_None_
-
-## Predefined Values
-
-```python
-
-ACCESS_CHK_DESCENDANT = 1024
-ACCESS_CHK_FINAL = 512
-ACCESS_CHK_INTERMEDIATE = 256
-ACCESS_OP_CREATE = 4
-ACCESS_OP_DELETE = 16
-ACCESS_OP_EXECUTE = 2
-ACCESS_OP_READ = 1
-ACCESS_OP_UPDATE = 8
-ACCESS_OP_WRITE = 32
-ACCESS_RESULT_ACCEPT = 0
-ACCESS_RESULT_CONTINUE = 2
-ACCESS_RESULT_DEFAULT = 3
-ACCESS_RESULT_REJECT = 1
-BAD_VALUE_BAD_KEY_TAG = 32
-BAD_VALUE_BAD_LEXICAL = 19
-BAD_VALUE_BAD_TAG = 21
-BAD_VALUE_BAD_VALUE = 20
-BAD_VALUE_CUSTOM_FACET_ERROR_MESSAGE = 16
-BAD_VALUE_ENUMERATION = 11
-BAD_VALUE_FRACTION_DIGITS = 3
-BAD_VALUE_INVALID_FACET = 18
-BAD_VALUE_INVALID_REGEX = 9
-BAD_VALUE_INVALID_TYPE_NAME = 23
-BAD_VALUE_INVALID_UTF8 = 38
-BAD_VALUE_INVALID_XPATH = 34
-BAD_VALUE_INVALID_XPATH_AT_TAG = 40
-BAD_VALUE_INVALID_XPATH_PATH = 39
-BAD_VALUE_LENGTH = 15
-BAD_VALUE_MAX_EXCLUSIVE = 5
-BAD_VALUE_MAX_INCLUSIVE = 6
-BAD_VALUE_MAX_LENGTH = 14
-BAD_VALUE_MIN_EXCLUSIVE = 7
-BAD_VALUE_MIN_INCLUSIVE = 8
-BAD_VALUE_MIN_LENGTH = 13
-BAD_VALUE_MISSING_KEY = 37
-BAD_VALUE_MISSING_NAMESPACE = 27
-BAD_VALUE_NOT_RESTRICTED_XPATH = 35
-BAD_VALUE_NO_DEFAULT_NAMESPACE = 24
-BAD_VALUE_PATTERN = 12
-BAD_VALUE_POP_TOO_FAR = 31
-BAD_VALUE_RANGE = 29
-BAD_VALUE_STRING_FUN = 1
-BAD_VALUE_SYMLINK_BAD_KEY_REFERENCE = 33
-BAD_VALUE_TOTAL_DIGITS = 4
-BAD_VALUE_UNIQUELIST = 10
-BAD_VALUE_UNKNOWN_BIT_LABEL = 22
-BAD_VALUE_UNKNOWN_NAMESPACE = 26
-BAD_VALUE_UNKNOWN_NAMESPACE_PREFIX = 25
-BAD_VALUE_USER_ERROR = 17
-BAD_VALUE_VALUE2VALUE_FUN = 28
-BAD_VALUE_WRONG_DECIMAL64_FRACTION_DIGITS = 2
-BAD_VALUE_WRONG_NUMBER_IDENTIFIERS = 30
-BAD_VALUE_XPATH_ERROR = 36
-CLI_ACTION_NOT_FOUND = 13
-CLI_AMBIGUOUS_COMMAND = 63
-CLI_BAD_ACTION_RESPONSE = 16
-CLI_BAD_LEAF_VALUE = 6
-CLI_CDM_NOT_SUPPORTED = 74
-CLI_COMMAND_ABORTED = 2
-CLI_COMMAND_ERROR = 1
-CLI_COMMAND_FAILED = 3
-CLI_CONFIRMED_NOT_SUPPORTED = 39
-CLI_COPY_CONFIG_FAILED = 32
-CLI_COPY_FAILED = 31
-CLI_COPY_PATH_IDENTICAL = 33
-CLI_CREATE_PATH = 23
-CLI_CUSTOM_ERROR = 4
-CLI_DELETE_ALL_FAILED = 10
-CLI_DELETE_ERROR = 12
-CLI_DELETE_FAILED = 11
-CLI_ELEMENT_DOES_NOT_EXIST = 66
-CLI_ELEMENT_MANDATORY = 75
-CLI_ELEMENT_NOT_FOUND = 14
-CLI_ELEM_NOT_WRITABLE = 7
-CLI_EXPECTED_BOL = 56
-CLI_EXPECTED_EOL = 57
-CLI_FAILED_COPY_RUNNING = 38
-CLI_FAILED_CREATE_CONTEXT = 37
-CLI_FAILED_OPEN_STARTUP = 41
-CLI_FAILED_OPEN_STARTUP_CONFIG = 42
-CLI_FAILED_TERM_REDIRECT = 49
-CLI_ILLEGAL_DIRECTORY_NAME = 52
-CLI_ILLEGAL_FILENAME = 53
-CLI_INCOMPLETE_CMD_PATH = 67
-CLI_INCOMPLETE_COMMAND = 9
-CLI_INCOMPLETE_PATH = 8
-CLI_INCOMPLETE_PATTERN = 64
-CLI_INVALID_PARAMETER = 54
-CLI_INVALID_PASSWORD = 21
-CLI_INVALID_PATH = 58
-CLI_INVALID_ROLLBACK_NR = 15
-CLI_INVALID_SELECT = 59
-CLI_MESSAGE_TOO_LARGE = 48
-CLI_MISSING_ACTION_PARAM = 17
-CLI_MISSING_ACTION_PARAM_VALUE = 18
-CLI_MISSING_ARGUMENT = 69
-CLI_MISSING_DISPLAY_GROUP = 55
-CLI_MISSING_ELEMENT = 65
-CLI_MISSING_VALUE = 68
-CLI_MOVE_FAILED = 30
-CLI_MUST_BE_AN_INTEGER = 70
-CLI_MUST_BE_INTEGER = 43
-CLI_MUST_BE_TRUE_OR_FALSE = 71
-CLI_NOT_ALLOWED = 5
-CLI_NOT_A_DIRECTORY = 50
-CLI_NOT_A_FILE = 51
-CLI_NOT_FOUND = 28
-CLI_NOT_SUPPORTED = 34
-CLI_NOT_WRITABLE = 27
-CLI_NO_SUCH_ELEMENT = 45
-CLI_NO_SUCH_SESSION = 44
-CLI_NO_SUCH_USER = 47
-CLI_ON_LINE = 25
-CLI_ON_LINE_DESC = 26
-CLI_OPEN_FILE = 20
-CLI_READ_ERROR = 19
-CLI_REALLOCATE = 24
-CLI_SENSITIVE_DATA = 73
-CLI_SET_FAILED = 29
-CLI_START_REPLAY_FAILED = 72
-CLI_TARGET_EXISTS = 35
-CLI_UNKNOWN_ARGUMENT = 61
-CLI_UNKNOWN_COMMAND = 62
-CLI_UNKNOWN_ELEMENT = 60
-CLI_UNKNOWN_HIDEGROUP = 22
-CLI_UNKNOWN_MODE = 36
-CLI_WILDCARD_NOT_ALLOWED = 46
-CLI_WRITE_CONFIG_FAILED = 40
-COMPLETION = 0
-COMPLETION_DEFAULT = 3
-COMPLETION_DESC = 2
-COMPLETION_INFO = 1
-CONTROL_SOCKET = 0
-C_CREATE = 2
-C_MOVE_AFTER = 6
-C_REMOVE = 3
-C_SET_ATTR = 5
-C_SET_CASE = 4
-C_SET_ELEM = 1
-DAEMON_FLAG_BULK_GET_CONTAINER = 128
-DAEMON_FLAG_NO_DEFAULTS = 4
-DAEMON_FLAG_PREFER_BULK_GET = 64
-DAEMON_FLAG_REG_DONE = 65536
-DAEMON_FLAG_REG_REPLACE_DISCONNECT = 16
-DAEMON_FLAG_SEND_IKP = 1
-DAEMON_FLAG_STRINGSONLY = 2
-DATA_AFTER = 1
-DATA_BEFORE = 0
-DATA_CREATE = 0
-DATA_DELETE = 1
-DATA_FIRST = 2
-DATA_INSERT = 2
-DATA_LAST = 3
-DATA_MERGE = 3
-DATA_MOVE = 4
-DATA_REMOVE = 6
-DATA_REPLACE = 5
-DATA_WANT_FILTER = 1
-ERRTYPE_BAD_VALUE = 2
-ERRTYPE_CLI = 4
-ERRTYPE_MISC = 8
-ERRTYPE_NCS = 16
-ERRTYPE_OPERATION = 32
-ERRTYPE_VALIDATION = 1
-MISC_ACCESS_DENIED = 5
-MISC_APPLICATION = 19
-MISC_APPLICATION_INTERNAL = 20
-MISC_BAD_PERSIST_ID = 16
-MISC_CANDIDATE_ABORT_BAD_USID = 17
-MISC_CDB_OPER_UNAVAILABLE = 37
-MISC_DATA_MISSING = 44
-MISC_EXTERNAL = 22
-MISC_EXTERNAL_TIMEOUT = 45
-MISC_FILE_ACCESS_PATH = 33
-MISC_FILE_BAD_PATH = 34
-MISC_FILE_BAD_VALUE = 35
-MISC_FILE_CORRUPT = 52
-MISC_FILE_CREATE_PATH = 29
-MISC_FILE_DELETE_PATH = 32
-MISC_FILE_EOF = 36
-MISC_FILE_MOVE_PATH = 30
-MISC_FILE_OPEN_ERROR = 27
-MISC_FILE_SET_PATH = 31
-MISC_FILE_SYNTAX_ERROR = 28
-MISC_FILE_SYNTAX_ERROR_1 = 26
-MISC_HA_ABORT = 55
-MISC_INCONSISTENT_VALUE = 7
-MISC_INDEXED_VIEW_LIST_HOLE = 46
-MISC_INDEXED_VIEW_LIST_TOO_BIG = 18
-MISC_INTERNAL = 21
-MISC_INTERRUPT = 10
-MISC_IN_USE = 3
-MISC_LOCKED_BY = 4
-MISC_MISSING_INSTANCE = 8
-MISC_NODE_IS_READONLY = 13
-MISC_NODE_WAS_READONLY = 14
-MISC_NOT_IMPLEMENTED = 43
-MISC_NO_SUCH_FILE = 2
-MISC_OPERATION_NOT_SUPPORTED = 38
-MISC_PROTO_USAGE = 23
-MISC_REACHED_MAX_RETRIES = 56
-MISC_RESOLVE_NEEDED = 53
-MISC_RESOURCE_DENIED = 6
-MISC_ROLLBACK_DISABLED = 1
-MISC_ROTATE_LIST_KEY = 58
-MISC_SNMP_BAD_INDEX = 42
-MISC_SNMP_BAD_VALUE = 41
-MISC_SNMP_ERROR = 39
-MISC_SNMP_TIMEOUT = 40
-MISC_SUBAGENT_DOWN = 24
-MISC_SUBAGENT_ERROR = 25
-MISC_TOO_MANY_SESSIONS = 11
-MISC_TOO_MANY_TRANSACTIONS = 12
-MISC_TRANSACTION_CONFLICT = 54
-MISC_UNSUPPORTED_XML_ENCODING = 57
-MISC_UPGRADE_IN_PROGRESS = 15
-MISC_WHEN_FAILED = 9
-MISC_XPATH_COMPILE = 51
-NCS_BAD_AUTHGROUP_CALLBACK_RESPONSE = 104
-NCS_BAD_CAPAS = 14
-NCS_CALL_HOME = 107
-NCS_CLI_LOAD = 19
-NCS_COMMIT_QUEUED = 20
-NCS_COMMIT_QUEUED_AND_DELETED = 113
-NCS_COMMIT_QUEUE_DISABLED = 111
-NCS_COMMIT_QUEUE_HAS_OVERLAPPING = 103
-NCS_COMMIT_QUEUE_HAS_SENTINEL = 75
-NCS_CONFIG_LOCKED = 84
-NCS_CONFLICTING_INTENT = 125
-NCS_CONNECTION_CLOSED = 10
-NCS_CONNECTION_REFUSED = 5
-NCS_CONNECTION_TIMEOUT = 8
-NCS_CQ_BLOCK_OTHERS = 21
-NCS_CQ_REMOTE_NOT_ENABLED = 22
-NCS_DEV_AUTH_FAILED = 1
-NCS_DEV_IN_USE = 81
-NCS_HOST_LOOKUP = 12
-NCS_LOCKED = 3
-NCS_NCS_ACTION_NO_TRANSACTION = 67
-NCS_NCS_ALREADY_EXISTS = 82
-NCS_NCS_CLUSTER_AUTH_FAILED = 74
-NCS_NCS_DEV_ERROR = 69
-NCS_NCS_ERROR = 68
-NCS_NCS_ERROR_IKP = 70
-NCS_NCS_LOAD_TEMPLATE_COPY_TREE_CROSS_NS = 96
-NCS_NCS_LOAD_TEMPLATE_DUPLICATE_MACRO = 119
-NCS_NCS_LOAD_TEMPLATE_EOF_XML = 33
-NCS_NCS_LOAD_TEMPLATE_EXTRA_MACRO_VARS = 118
-NCS_NCS_LOAD_TEMPLATE_INVALID_CBTYPE = 128
-NCS_NCS_LOAD_TEMPLATE_INVALID_PI_REGEX = 122
-NCS_NCS_LOAD_TEMPLATE_INVALID_PI_SYNTAX = 86
-NCS_NCS_LOAD_TEMPLATE_INVALID_VALUE_XML = 30
-NCS_NCS_LOAD_TEMPLATE_MISPLACED_IF_NED_ID_MATCH_XML = 121
-NCS_NCS_LOAD_TEMPLATE_MISPLACED_IF_NED_ID_XML = 110
-NCS_NCS_LOAD_TEMPLATE_MISSING_ELEMENT2_XML = 98
-NCS_NCS_LOAD_TEMPLATE_MISSING_ELEMENT_XML = 29
-NCS_NCS_LOAD_TEMPLATE_MISSING_MACRO_VARS = 117
-NCS_NCS_LOAD_TEMPLATE_MULTIPLE_ELEMENTS_XML = 38
-NCS_NCS_LOAD_TEMPLATE_MULTIPLE_KEY_LEAFS_XML = 77
-NCS_NCS_LOAD_TEMPLATE_MULTIPLE_SP_XML = 35
-NCS_NCS_LOAD_TEMPLATE_SHADOWED_NED_ID_XML = 109
-NCS_NCS_LOAD_TEMPLATE_TAG_AMBIGUOUS_XML = 102
-NCS_NCS_LOAD_TEMPLATE_TRAILING_XML = 32
-NCS_NCS_LOAD_TEMPLATE_UNCLOSED_PI = 88
-NCS_NCS_LOAD_TEMPLATE_UNEXPECTED_PI = 89
-NCS_NCS_LOAD_TEMPLATE_UNKNOWN_ATTRIBUTE_XML = 31
-NCS_NCS_LOAD_TEMPLATE_UNKNOWN_ELEMENT2_XML = 97
-NCS_NCS_LOAD_TEMPLATE_UNKNOWN_ELEMENT_XML = 36
-NCS_NCS_LOAD_TEMPLATE_UNKNOWN_MACRO = 116
-NCS_NCS_LOAD_TEMPLATE_UNKNOWN_NED_ID_XML = 99
-NCS_NCS_LOAD_TEMPLATE_UNKNOWN_NS_XML = 37
-NCS_NCS_LOAD_TEMPLATE_UNKNOWN_PI = 85
-NCS_NCS_LOAD_TEMPLATE_UNKNOWN_SP_XML = 34
-NCS_NCS_LOAD_TEMPLATE_UNMATCHED_PI = 87
-NCS_NCS_LOAD_TEMPLATE_UNSUPPORTED_NED_ID_AT_TAG_XML = 101
-NCS_NCS_LOAD_TEMPLATE_UNSUPPORTED_NED_ID_XML = 100
-NCS_NCS_LOAD_TEMPLATE_UNSUPPORTED_NETCONF_YANG_ATTRIBUTES = 126
-NCS_NCS_MISSING_CLUSTER_AUTH = 73
-NCS_NCS_MISSING_VARIABLES = 52
-NCS_NCS_NED_MULTI_ERROR = 76
-NCS_NCS_NO_CAPABILITIES = 64
-NCS_NCS_NO_DIFF = 71
-NCS_NCS_NO_FORWARD_DIFF = 72
-NCS_NCS_NO_NAMESPACE = 65
-NCS_NCS_NO_SP_TEMPLATE = 48
-NCS_NCS_NO_TEMPLATE = 47
-NCS_NCS_NO_TEMPLATE_XML = 23
-NCS_NCS_NO_WRITE_TRANSACTION = 66
-NCS_NCS_OPERATION_LOCKED = 83
-NCS_NCS_PACKAGE_SYNC_MISMATCHED_LOAD_PATH = 123
-NCS_NCS_SERVICE_CONFLICT = 78
-NCS_NCS_TEMPLATE_CONTEXT_NODE_NOEXISTS = 90
-NCS_NCS_TEMPLATE_COPY_TREE_BAD_OP = 94
-NCS_NCS_TEMPLATE_FOREACH = 51
-NCS_NCS_TEMPLATE_FOREACH_XML = 28
-NCS_NCS_TEMPLATE_GUARD_LENGTH = 59
-NCS_NCS_TEMPLATE_GUARD_LENGTH_XML = 44
-NCS_NCS_TEMPLATE_INSERT = 55
-NCS_NCS_TEMPLATE_INSERT_XML = 40
-NCS_NCS_TEMPLATE_LONE_GUARD = 57
-NCS_NCS_TEMPLATE_LONE_GUARD_XML = 42
-NCS_NCS_TEMPLATE_LOOP_PREVENTION = 95
-NCS_NCS_TEMPLATE_MISSING_VALUE = 56
-NCS_NCS_TEMPLATE_MISSING_VALUE_XML = 41
-NCS_NCS_TEMPLATE_MOVE = 60
-NCS_NCS_TEMPLATE_MOVE_XML = 45
-NCS_NCS_TEMPLATE_MULTIPLE_CONTEXT_NODES = 92
-NCS_NCS_TEMPLATE_NOT_CREATED = 80
-NCS_NCS_TEMPLATE_NOT_CREATED_XML = 79
-NCS_NCS_TEMPLATE_ORDERED_LIST = 54
-NCS_NCS_TEMPLATE_ORDERED_LIST_XML = 39
-NCS_NCS_TEMPLATE_ROOT_LEAF_LIST = 93
-NCS_NCS_TEMPLATE_SAVED_CONTEXT_NOEXISTS = 91
-NCS_NCS_TEMPLATE_STR2VAL = 61
-NCS_NCS_TEMPLATE_STR2VAL_XML = 46
-NCS_NCS_TEMPLATE_UNSUPPORTED_NED_ID = 112
-NCS_NCS_TEMPLATE_VALUE_LENGTH = 58
-NCS_NCS_TEMPLATE_VALUE_LENGTH_XML = 43
-NCS_NCS_TEMPLATE_WHEN = 50
-NCS_NCS_TEMPLATE_WHEN_KEY_XML = 27
-NCS_NCS_TEMPLATE_WHEN_XML = 26
-NCS_NCS_XPATH = 53
-NCS_NCS_XPATH_COMPILE = 49
-NCS_NCS_XPATH_COMPILE_XML = 24
-NCS_NCS_XPATH_VARBIND = 63
-NCS_NCS_XPATH_XML = 25
-NCS_NED_EXTERNAL_ERROR = 6
-NCS_NED_INTERNAL_ERROR = 7
-NCS_NED_OFFLINE_UNAVAILABLE = 108
-NCS_NED_OUT_OF_SYNC = 18
-NCS_NONED = 15
-NCS_NO_EXISTS = 2
-NCS_NO_TEMPLATE = 62
-NCS_NO_YANG_MODULES = 16
-NCS_NS_SUPPORT = 13
-NCS_OVERLAPPING_PRESENCE_AND_ABSENCE_ASSERTION_COMPLIANCE_TEMPLATE = 127
-NCS_OVERLAPPING_STRICT_ASSERTION_COMPLIANCE_TEMPLATE = 129
-NCS_PLAN_LOCATION = 120
-NCS_REVDROP = 17
-NCS_RPC_ERROR = 9
-NCS_SERVICE_CREATE = 0
-NCS_SERVICE_DELETE = 2
-NCS_SERVICE_UPDATE = 1
-NCS_SESSION_LIMIT_EXCEEDED = 115
-NCS_SOUTHBOUND_LOCKED = 4
-NCS_UNKNOWN_NED_ID = 105
-NCS_UNKNOWN_NED_IDS_COMPLIANCE_TEMPLATE = 124
-NCS_UNKNOWN_NED_ID_DEVICE_TEMPLATE = 106
-NCS_XML_PARSE = 11
-NCS_YANGLIB_NO_SCHEMA_FOR_RUNNING = 114
-OPERATION_CASE_EXISTS = 13
-PATCH_FLAG_AAA_CHECKED = 8
-PATCH_FLAG_BUFFER_DAMPENED = 2
-PATCH_FLAG_FILTER = 4
-PATCH_FLAG_INCOMPLETE = 1
-WORKER_SOCKET = 1
-```
diff --git a/developer-reference/pyapi/_ncs.error.md b/developer-reference/pyapi/_ncs.error.md
deleted file mode 100644
index c61c337c..00000000
--- a/developer-reference/pyapi/_ncs.error.md
+++ /dev/null
@@ -1,88 +0,0 @@
-# \_ncs.error Module
-
-This module defines new NCS Python API exception classes.
-
-Instead of checking for CONFD_ERR or CONFD_EOF return codes, all Python
-module APIs raise an exception.
-
-## Classes
-
-### _class_ **EOF**
-
-This exception will be thrown from an API function that, from a C perspective,
-would result in a CONFD_EOF return value.
-
-Members:
-
-add_note(...)
-
-Method:
-
-Exception.add_note(note) -- add a note to the exception
-
-args
-
-with_traceback(...)
-
-Method:
-
-Exception.with_traceback(tb) -- set self.__traceback__ to tb and return self.
-
-### _class_ **Error**
-
-This exception will be thrown from an API function that, from a C perspective,
-would result in a CONFD_ERR return value.
-
-Available attributes:
-
-* confd_errno -- the underlying error number
-* confd_strerror -- string representation of the confd_errno
-* confd_lasterr -- string with additional textual information
-* strerror -- os error string (available if confd_errno is CONFD_ERR_OS)
-
-Members:
-
-add_note(...)
-
-Method:
-
-Exception.add_note(note) -- add a note to the exception
-
-args
-
-with_traceback(...)
-
-Method:
-
-Exception.with_traceback(tb) -- set self.__traceback__ to tb and return self.
-
diff --git a/developer-reference/pyapi/_ncs.events.md b/developer-reference/pyapi/_ncs.events.md
deleted file mode 100644
index 2fc74f74..00000000
--- a/developer-reference/pyapi/_ncs.events.md
+++ /dev/null
@@ -1,405 +0,0 @@
-# \_ncs.events Module
-
-Low level module for subscribing to NCS event notifications.
-
-This module is used to connect to NCS and subscribe to certain events generated by NCS. The API to receive events from NCS is a socket based API whereby the application connects to NCS and receives events on a socket. See also the Notifications chapter in the User Guide. The program misc/notifications/confd\_notifications.c in the examples collection illustrates subscription and processing for all these events, and can also be used standalone in a development environment to monitor NCS events.
-
-This documentation should be read together with the [confd\_lib\_events(3)](../../resources/man/confd_lib_events.3.md) man page.
-
-## Functions
-
-### diff\_notification\_done
-
-```python
-diff_notification_done(sock, tctx) -> None
-```
-
-If the received event was NOTIF\_COMMIT\_DIFF, it is important that we call this function when we are done reading the transaction diffs over MAAPI. The transaction hangs until this function gets called. This function also releases memory associated with the transaction in the library.
-
-Keyword arguments:
-
-* sock -- a previously connected notification socket
-* tctx -- a transaction context
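-
-A sketch of the expected call pattern (attaching to the transaction and reading the diffs over MAAPI is elided):
-
-```
-from _ncs import events
-
-n = events.read_notification(sock)
-if n['type'] == events.NOTIF_COMMIT_DIFF:
-    tctx = n['tctx']
-    # ... attach over MAAPI and read the transaction diffs ...
-    events.diff_notification_done(sock, tctx)
-```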
-
-### notifications\_connect
-
-```python
-notifications_connect(sock, mask, ip, port, path) -> None
-```
-
-This function creates a notification socket.
-
-Keyword arguments:
-
-* sock -- a Python socket instance
-* mask -- a bitmask of one or several notification type values
-* ip -- the ip address if socket is AF\_INET (optional)
-* port -- the port if socket is AF\_INET (optional)
-* path -- a filename if socket is AF\_UNIX (optional).
-
-### notifications\_connect2
-
-```python
-notifications_connect2(sock, mask, data, ip, port, path) -> None
-```
-
-This variant of notifications\_connect is required if we wish to subscribe to NOTIF\_HEARTBEAT, NOTIF\_HEALTH\_CHECK, or NOTIF\_STREAM\_EVENT events.
-
-Keyword arguments:
-
-* sock -- a Python socket instance
-* mask -- a bitmask of one or several notification type values
-* data -- a \_events.NotificationsData instance
-* ip -- the ip address if socket is AF\_INET (optional)
-* port -- the port if socket is AF\_INET (optional)
-* path -- a filename if socket is AF\_UNIX (optional)
-
-### read\_notification
-
-```python
-read_notification(sock) -> dict
-```
-
-The application is responsible for polling the notification socket. Once data is available to be read on the socket the application must call read\_notification() to read the data from the socket. On success a dictionary containing notification information will be returned (see below).
-
-Keyword arguments:
-
-* sock -- a previously connected notification socket
-
-On success the returned dict will contain information corresponding to the c struct confd\_notification. The notification type is accessible through the 'type' key. The remaining information will be different depending on which type of notification this is (described below).
-
-Keys for type NOTIF\_AUDIT (struct confd\_audit\_notification):
-
-* logno
-* user
-* msg
-* usid
-
-Keys for type NOTIF\_DAEMON, NOTIF\_NETCONF, NOTIF\_DEVEL, NOTIF\_JSONRPC, NOTIF\_WEBUI, or NOTIF\_TAKEOVER\_SYSLOG (struct confd\_syslog\_notification):
-
-* prio
-* logno
-* msg
-
-Keys for type NOTIF\_COMMIT\_SIMPLE (struct confd\_commit\_notification):
-
-* database
-* diff\_available
-* flags
-* uinfo
-
-Keys for type NOTIF\_COMMIT\_DIFF (struct confd\_commit\_diff\_notification):
-
-* database
-* flags
-* uinfo
-* tctx
-* label (optional)
-* comment (optional)
-
-Keys for type NOTIF\_USER\_SESSION (struct confd\_user\_sess\_notification):
-
-* type
-* uinfo
-* database
-
-Keys for type NOTIF\_HA\_INFO (struct confd\_ha\_notification):
-
-* type (1)
-* noprimary - if (1) is HA\_INFO\_NOPRIMARY
-* secondary\_died - if (1) is HA\_INFO\_SECONDARY\_DIED (see below)
-* secondary\_arrived - if (1) is HA\_INFO\_SECONDARY\_ARRIVED (see below)
-* cdb\_initialized\_by\_copy - if (1) is HA\_INFO\_SECONDARY\_INITIALIZED
-* besecondary\_result - if (1) is HA\_INFO\_BESECONDARY\_RESULT
-
-If secondary\_died or secondary\_arrived is present they will in turn contain a dictionary with the following keys:
-
-* nodeid
-* af (1)
-* ip4 - if (1) is AF\_INET
-* ip6 - if (1) is AF\_INET6
-* str - if (1) is AF\_UNSPEC
-
-Keys for type NOTIF\_SUBAGENT\_INFO (struct confd\_subagent\_notification):
-
-* type
-* name
-
-Keys for type NOTIF\_COMMIT\_FAILED (struct confd\_commit\_failed\_notification):
-
-* provider (1)
-* dbname
-* port - if (1) is DP\_NETCONF
-* af (2) - if (1) is DP\_NETCONF
-* ip4 - if (2) is AF\_INET
-* ip6 - if (2) is AF\_INET6
-* daemon\_name - if (1) is DP\_EXTERNAL
-
-Keys for type NOTIF\_SNMPA (struct confd\_snmpa\_notification):
-
-* pdu\_type (1)
-* request\_id
-* error\_status
-* error\_index
-* port
-* af (2)
-* ip4 - if (2) is AF\_INET
-* ip6 - if (2) is AF\_INET6
-* vb (optional)
-* generic\_trap - if (1) is SNMPA\_PDU\_V1TRAP
-* specific\_trap - if (1) is SNMPA\_PDU\_V1TRAP
-* time\_stamp - if (1) is SNMPA\_PDU\_V1TRAP
-* enterprise - if (1) is SNMPA\_PDU\_V1TRAP (optional)
-
-Keys for type NOTIF\_FORWARD\_INFO (struct confd\_forward\_notification):
-
-* type
-* target
-* uinfo
-
-Keys for type NOTIF\_CONFIRMED\_COMMIT (struct confd\_confirmed\_commit\_notification):
-
-* type
-* timeout
-* uinfo
-
-Keys for type NOTIF\_UPGRADE\_EVENT (struct confd\_upgrade\_notification):
-
-* event
-
-Keys for type NOTIF\_COMPACTION (struct confd\_compaction\_notification):
-
-* dbfile (1) - name of the compacted file
-* type - automatic or manual
-* fsize\_start - size at start (bytes)
-* fsize\_end - size at end (bytes)
-* fsize\_last - size at end of last compaction (bytes)
-* time\_start - start time (microseconds)
-* duration - duration (microseconds)
-* ntrans - number of transactions written to (1) since last compaction
-
-Keys for type NOTIF\_COMMIT\_PROGRESS and NOTIF\_PROGRESS (struct confd\_progress\_notification):
-
-* type (1)
-* timestamp
-* duration if (1) is CONFD\_PROGRESS\_STOP
-* trace\_id (optional)
-* span\_id
-* parent\_span\_id (optional)
-* usid
-* tid
-* datastore
-* context (optional)
-* subsystem (optional)
-* msg (optional)
-* annotation (optional)
-* num\_attributes
-* attributes (optional)
-* num\_links
-* links (optional)
-
-Keys for type NOTIF\_STREAM\_EVENT (struct confd\_stream\_notification):
-
-* type (1)
-* error - if (1) is STREAM\_REPLAY\_FAILED
-* event\_time - if (1) is STREAM\_NOTIFICATION\_EVENT
-* values - if (1) is STREAM\_NOTIFICATION\_EVENT
-
-Keys for type NOTIF\_CQ\_PROGRESS (struct ncs\_cq\_progress\_notification):
-
-* type
-* timestamp
-* cq\_id
-* cq\_tag
-* label
-* completed\_devices (optional)
-* transient\_devices (optional)
-* failed\_devices (optional)
-* failed\_reasons - if failed\_devices is present
-* completed\_services (optional)
-* completed\_services\_completed\_devices - if completed\_services is present
-* failed\_services (optional)
-* failed\_services\_completed\_devices - if failed\_services is present
-* failed\_services\_failed\_devices - if failed\_services is present
-
-Keys for type NOTIF\_CALL\_HOME\_INFO (struct ncs\_call\_home\_notification):
-
-* type (1)
-* device - if (1) is CALL\_HOME\_DEVICE\_CONNECTED or CALL\_HOME\_DEVICE\_DISCONNECTED
-* af (2)
-* ip4 - if (2) is AF\_INET
-* ip6 - if (2) is AF\_INET6
-* port
-* ssh\_host\_key
-* ssh\_key\_alg
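-
-A minimal polling sketch, assuming a socket connected with notifications\_connect() and subscribed to NOTIF\_AUDIT:
-
-```
-import select
-from _ncs import events
-
-while True:
-    rl, _, _ = select.select([sock], [], [])
-    if sock in rl:
-        n = events.read_notification(sock)
-        if n['type'] == events.NOTIF_AUDIT:
-            print(n['user'], n['msg'])
-```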
-
-### sync\_audit\_network\_notification
-
-```python
-sync_audit_network_notification(sock, usid) -> None
-```
-
-If the received event was NOTIF\_AUDIT\_NETWORK, and we are subscribing to notifications with the flag NOTIF\_AUDIT\_NETWORK\_SYNC, this function must be called when we are done processing the notification. The user session hangs until this function gets called.
-
-Keyword arguments:
-
-* sock -- a previously connected notification socket
-* usid -- the user session id
-
-### sync\_audit\_notification
-
-```python
-sync_audit_notification(sock, usid) -> None
-```
-
-If the received event was NOTIF\_AUDIT, and we are subscribing to notifications with the flag NOTIF\_AUDIT\_SYNC, this function must be called when we are done processing the notification. The user session hangs until this function gets called.
-
-Keyword arguments:
-
-* sock -- a previously connected notification socket
-* usid -- the user session id
-
-### sync\_ha\_notification
-
-```python
-sync_ha_notification(sock) -> None
-```
-
-If the received event was NOTIF\_HA\_INFO, and we are subscribing to notifications with the flag NOTIF\_HA\_INFO\_SYNC, this function must be called when we are done processing the notification. All HA processing is blocked until this function gets called.
-
-Keyword arguments:
-
-* sock -- a previously connected notification socket
-
-## Classes
-
-### _class_ **Notification**
-
-This is a placeholder for the c-type struct confd\_notification.
-
-Notification cannot be directly instantiated from Python.
-
-Members:
-
-_None_
-
-### _class_ **NotificationsData**
-
-This type represents the c-type struct confd\_notifications\_data.
-
-The constructor for this type has the following signature:
-
-NotificationsData(heartbeat\_interval, health\_check\_interval, stream\_name, start\_time, stop\_time, xpath\_filter, usid, verbosity) -> object
-
-Keyword arguments:
-
-* heartbeat\_interval -- time in milliseconds (int)
-* health\_check\_interval -- time in milliseconds (int)
-* stream\_name -- name of the notification stream (string)
-* start\_time -- the start time (Value)
-* stop\_time -- the stop time (Value)
-* xpath\_filter -- XPath filter for the stream (string) - optional
-* usid -- user session id for AAA restriction (int) - optional
-* verbosity -- progress verbosity level (int) - optional
-
-Members:
-
-_None_
-
-## Predefined Values
-
-```python
-
-ABORT_COMMIT = 3
-CALL_HOME_DEVICE_CONNECTED = 1
-CALL_HOME_DEVICE_DISCONNECTED = 3
-CALL_HOME_UNKNOWN_DEVICE = 2
-COMPACTION_AUTOMATIC = 1
-COMPACTION_A_CDB = 1
-COMPACTION_MANUAL = 2
-COMPACTION_O_CDB = 2
-COMPACTION_S_CDB = 3
-CONFIRMED_COMMIT = 1
-CONFIRMING_COMMIT = 2
-DP_CDB = 1
-DP_EXTERNAL = 3
-DP_JAVASCRIPT = 5
-DP_NETCONF = 2
-DP_SNMPGW = 4
-FORWARD_INFO_DOWN = 2
-FORWARD_INFO_FAILED = 3
-FORWARD_INFO_UP = 1
-HA_INFO_BESECONDARY_RESULT = 7
-HA_INFO_BESLAVE_RESULT = 7
-HA_INFO_IS_MASTER = 5
-HA_INFO_IS_NONE = 6
-HA_INFO_IS_PRIMARY = 5
-HA_INFO_NOMASTER = 1
-HA_INFO_NOPRIMARY = 1
-HA_INFO_SECONDARY_ARRIVED = 3
-HA_INFO_SECONDARY_DIED = 2
-HA_INFO_SECONDARY_INITIALIZED = 4
-HA_INFO_SLAVE_ARRIVED = 3
-HA_INFO_SLAVE_DIED = 2
-HA_INFO_SLAVE_INITIALIZED = 4
-NCS_CQ_ITEM_COMPLETED = 4
-NCS_CQ_ITEM_DELETED = 6
-NCS_CQ_ITEM_EXECUTING = 2
-NCS_CQ_ITEM_FAILED = 5
-NCS_CQ_ITEM_LOCKED = 3
-NCS_CQ_ITEM_WAITING = 1
-NCS_NOTIF_AUDIT_NETWORK = 268435456
-NCS_NOTIF_AUDIT_NETWORK_SYNC = 536870912
-NCS_NOTIF_CALL_HOME_INFO = 33554432
-NCS_NOTIF_CQ_PROGRESS = 4194304
-NCS_NOTIF_PACKAGE_RELOAD = 2097152
-NOTIF_AUDIT = 1
-NOTIF_AUDIT_SYNC = 131072
-NOTIF_COMMIT_DIFF = 16
-NOTIF_COMMIT_FAILED = 256
-NOTIF_COMMIT_FLAG_CONFIRMED = 1
-NOTIF_COMMIT_FLAG_CONFIRMED_EXTENDED = 2
-NOTIF_COMMIT_PROGRESS = 65536
-NOTIF_COMMIT_SIMPLE = 8
-NOTIF_COMPACTION = 1073741824
-NOTIF_CONFIRMED_COMMIT = 16384
-NOTIF_DAEMON = 2
-NOTIF_DEVEL = 4096
-NOTIF_FORWARD_INFO = 1024
-NOTIF_HA_INFO = 64
-NOTIF_HA_INFO_SYNC = 1048576
-NOTIF_HEALTH_CHECK = 262144
-NOTIF_HEARTBEAT = 8192
-NOTIF_JSONRPC = 67108864
-NOTIF_NETCONF = 2048
-NOTIF_PROGRESS = 16777216
-NOTIF_REOPEN_LOGS = 8388608
-NOTIF_SNMPA = 512
-NOTIF_STREAM_EVENT = 524288
-NOTIF_SUBAGENT_INFO = 128
-NOTIF_SYSLOG = 2
-NOTIF_SYSLOG_TAKEOVER = 6
-NOTIF_TAKEOVER_SYSLOG = 4
-NOTIF_UPGRADE_EVENT = 32768
-NOTIF_USER_SESSION = 32
-NOTIF_WEBUI = 134217728
-PROGRESS_ATTRIBUTE_NUMBER = 2
-PROGRESS_ATTRIBUTE_STRING = 1
-STREAM_NOTIFICATION_COMPLETE = 2
-STREAM_NOTIFICATION_EVENT = 1
-STREAM_REPLAY_COMPLETE = 3
-STREAM_REPLAY_FAILED = 4
-SUBAGENT_INFO_DOWN = 2
-SUBAGENT_INFO_UP = 1
-UPGRADE_ABORTED = 5
-UPGRADE_COMMITED = 4
-UPGRADE_INIT_STARTED = 1
-UPGRADE_INIT_SUCCEEDED = 2
-UPGRADE_PERFORMED = 3
-USER_SESS_LOCK = 3
-USER_SESS_START = 1
-USER_SESS_START_TRANS = 5
-USER_SESS_STOP = 2
-USER_SESS_STOP_TRANS = 6
-USER_SESS_UNLOCK = 4
-```
diff --git a/developer-reference/pyapi/_ncs.ha.md b/developer-reference/pyapi/_ncs.ha.md
deleted file mode 100644
index aede552b..00000000
--- a/developer-reference/pyapi/_ncs.ha.md
+++ /dev/null
@@ -1,142 +0,0 @@
-# \_ncs.ha Module
-
-Low level module for connecting to NCS HA subsystem.
-
-This module is used to connect to the NCS High Availability (HA) subsystem. NCS can replicate the configuration data on several nodes in a cluster. The purpose of this API is to manage the HA functionality. The details on usage of the HA API are described in the chapter High availability in the User Guide.
-
-This documentation should be read together with the [confd\_lib\_ha(3)](../../resources/man/confd_lib_ha.3.md) man page.
-
-## Functions
-
-### bemaster
-
-```python
-bemaster(sock, mynodeid) -> None
-```
-
-This function is deprecated and will be removed. Use beprimary() instead.
-
-### benone
-
-```python
-benone(sock) -> None
-```
-
-Instruct a node to return to its initial state, i.e. to be neither primary nor secondary.
-
-Keyword arguments:
-
-* sock -- a previously connected HA socket
-
-### beprimary
-
-```python
-beprimary(sock, mynodeid) -> None
-```
-
-Instruct a HA node to be primary and also give the node a name.
-
-Keyword arguments:
-
-* sock -- a previously connected HA socket
-* mynodeid -- name of the node (Value or string)
-
-### berelay
-
-```python
-berelay(sock) -> None
-```
-
-Instruct an established HA secondary node to be a relay for other secondary nodes.
-
-Keyword arguments:
-
-* sock -- a previously connected HA socket
-
-### besecondary
-
-```python
-besecondary(sock, mynodeid, primary_id, primary_ip, waitreply) -> None
-```
-
-Instruct a NCS HA node to be a secondary node with a named primary node. If waitreply is True the function is synchronous and it will hang until the node has initialized its CDB database. This may mean that the CDB database is copied in its entirety from the primary node. If False, we do not wait for the reply, but it is possible to use a notifications socket and get notified asynchronously via a HA\_INFO\_BESECONDARY\_RESULT notification. In both cases, it is also possible to use a notifications socket and get notified asynchronously when CDB at the secondary node is initialized.
-
-Keyword arguments:
-
-* sock -- a previously connected HA socket
-* mynodeid -- name of this secondary node (Value or string)
-* primary\_id -- name of the primary node (Value or string)
-* primary\_ip -- ip address of the primary node
-* waitreply -- synchronous or not (bool)
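-
-E.g., a sketch where this node joins an assumed primary (the node names and address are placeholders):
-
-```
-from _ncs import ha
-
-ha.besecondary(sock, 'node1', 'node0', '198.51.100.10', True)
-```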
-
-### beslave
-
-```python
-beslave(sock, mynodeid, primary_id, primary_ip, waitreply) -> None
-```
-
-This function is deprecated and will be removed. Use besecondary() instead.
-
-### connect
-
-```python
-connect(sock, token, ip, port, pstr) -> None
-```
-
-Connect a HA socket which can be used to control a NCS HA node. The token is a secret string that must be shared by all participants in the cluster. There can only be one HA socket towards NCS. A new call to connect() makes NCS close the previous connection and reset the token to the new value.
-
-Keyword arguments:
-
-* sock -- a Python socket instance
-* token -- secret string
-* ip -- the ip address if socket is AF\_INET or AF\_INET6 (optional)
-* port -- the port if socket is AF\_INET or AF\_INET6 (optional)
-* pstr -- a filename if socket is AF\_UNIX (optional).
-
-### secondary\_dead
-
-```python
-secondary_dead(sock, nodeid) -> None
-```
-
-This function must be used by the application to inform the NCS HA subsystem that another node, which is possibly connected to NCS, is dead.
-
-Keyword arguments:
-
-* sock -- a previously connected HA socket
-* nodeid -- name of the node (Value or string)
-
-### slave\_dead
-
-```python
-slave_dead(sock, nodeid) -> None
-```
-
-This function is deprecated and will be removed. Use secondary\_dead() instead.
-
-### status
-
-```python
-status(sock) -> tuple
-```
-
-Query an NCS HA node for its status.
-
-Returns a 2-tuple of the HA status of the node in the format (State, \[list\_of\_nodes]) where 'list\_of\_nodes' is the primary/secondary(s) connected with the node.
-
-Keyword arguments:
-
-* sock -- a previously connected HA socket
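-
-E.g., using the STATE\_\* values listed under Predefined Values below:
-
-```
-from _ncs import ha
-
-state, nodes = ha.status(sock)
-if state == ha.STATE_PRIMARY:
-    print('primary with %d connected nodes' % len(nodes))
-```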
-
-## Predefined Values
-
-```python
-
-STATE_MASTER = 3
-STATE_NONE = 1
-STATE_PRIMARY = 3
-STATE_SECONDARY = 2
-STATE_SECONDARY_RELAY = 4
-STATE_SLAVE = 2
-STATE_SLAVE_RELAY = 4
-```
diff --git a/developer-reference/pyapi/_ncs.maapi.md b/developer-reference/pyapi/_ncs.maapi.md
deleted file mode 100644
index 96264589..00000000
--- a/developer-reference/pyapi/_ncs.maapi.md
+++ /dev/null
@@ -1,3005 +0,0 @@
-# \_ncs.maapi Module
-
-Low level module for connecting to NCS with a read/write interface inside transactions.
-
-This module is used to connect to the NCS transaction manager. The API described here has several purposes. We can use MAAPI when we wish to implement our own proprietary management agent. We also use MAAPI to attach to already existing NCS transactions, for example when we wish to implement semantic validation of configuration data in Python, and also when we wish to implement CLI wizards in Python.
-
-This documentation should be read together with the [confd\_lib\_maapi(3)](../../resources/man/confd_lib_maapi.3.md) man page.
-
-## Functions
-
-### aaa\_reload
-
-```python
-aaa_reload(sock, synchronous) -> None
-```
-
-Start a reload of aaa from external data provider.
-
-Used by an external data provider to notify NCS that there is a change to the AAA data. Calling the function with the argument 'synchronous' set to 1 or True means that the call will block until the loading is completed.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* synchronous -- if 1, block until the loading is complete; if 0, only initiate the loading of AAA data and return immediately
-
-### aaa\_reload\_path
-
-```python
-aaa_reload_path(sock, synchronous, path) -> None
-```
-
-Start a reload of aaa from external data provider.
-
-A variant of aaa\_reload() that causes only the AAA subtree given by path to be loaded.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* synchronous -- if 1, block until the loading is complete; if 0, only initiate the loading of AAA data and return immediately
-* path -- the subtree to be loaded
-
-### abort\_trans
-
-```python
-abort_trans(sock, thandle) -> None
-```
-
-Final phase of a two-phase transaction, aborting the transaction.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-
-### abort\_upgrade
-
-```python
-abort_upgrade(sock) -> None
-```
-
-Can be called before committing an upgrade in order to abort it.
-
-Final step in an upgrade.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### apply\_template
-
-```python
-apply_template(sock, thandle, template, variables, flags, rootpath) -> None
-```
-
-Apply a template that has been loaded into NCS. The template parameter gives the name of the template. This is NOT a FASTMAP function; for FASTMAP use, call shared\_ncs\_apply\_template() instead.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* template -- template name
-* variables -- None or a list of variables in the form of tuples
-* flags -- should be 0
-* rootpath -- in what context to apply the template
-
-### apply\_trans
-
-```python
-apply_trans(sock, thandle, keepopen) -> None
-```
-
-Apply a transaction.
-
-Validates, prepares and eventually commits or aborts the transaction. If the validation fails and the 'keepopen' argument is set to 1 or True, the transaction is left open and the developer can react upon the validation errors.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* keepopen -- if true, transaction is not discarded if validation fails
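-
-A sketch, assuming a write transaction handle th obtained elsewhere; on failure the Error exception from the \_ncs.error module carries the details:
-
-```
-import _ncs
-from _ncs import maapi
-
-try:
-    maapi.apply_trans(sock, th, False)
-except _ncs.error.Error as e:
-    print(e.confd_lasterr)  # textual reason for the failure
-```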
-
-### apply\_trans\_flags
-
-```python
-apply_trans_flags(sock, thandle, keepopen, flags) -> None
-```
-
-A variant of apply\_trans() that takes an additional 'flags' argument.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* keepopen -- if true, transaction is not discarded if validation fails
-* flags -- flags to set in the transaction
-
-### apply\_trans\_params
-
-```python
-apply_trans_params(sock, thandle, keepopen, params) -> list
-```
-
-A variant of apply\_trans() that takes commit parameters in the form of a list of TagValue objects and returns a list of TagValue objects depending on the parameters passed in.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* keepopen -- if true, transaction is not discarded if validation fails
-* params -- list of TagValue objects
-
-### attach
-
-```python
-attach(sock, hashed_ns, ctx) -> None
-```
-
-Attach to an existing transaction.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* hashed\_ns -- the namespace to use
-* ctx -- transaction context
-
-### attach2
-
-```python
-attach2(sock, hashed_ns, usid, thandle) -> None
-```
-
-Used when there is no transaction context beforehand, to attach to an existing transaction.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* hashed\_ns -- the namespace to use
-* usid -- user session id, can be set to 0 to use the owner of the transaction
-* thandle -- transaction handle
-
-### attach\_init
-
-```python
-attach_init(sock) -> int
-```
-
-Attach the MAAPI socket to the special transaction available during phase0. Returns the thandle as an integer.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### authenticate
-
-```python
-authenticate(sock, user, password, n) -> tuple
-```
-
-Authenticate a user session. Use the 'n' argument to get a list of at most n-1 groups that the user is a member of; use n=1 if the function is used in a context where the group names are not needed. Returns 1 if the user was accepted and no group list was requested. Otherwise a tuple is returned whose first element is a status code, 0 for rejection and 1 for acceptance, and whose second element either contains the reason for the rejection as a string or a list of group names.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* user -- username
-* password -- password
-* n -- number of groups to return
-
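-A minimal sketch of interpreting the return value; the credentials are hypothetical, and 'sock' is assumed to be a connected MAAPI socket:
-
-```python
-from _ncs import maapi
-
-res = maapi.authenticate(sock, 'admin', 'admin', 10)
-if res == 1:
-    print('accepted')                    # accepted, no groups returned
-elif res[0] == 1:
-    print('accepted, groups:', res[1])   # list of group names
-else:
-    print('rejected:', res[1])           # rejection reason string
-```
-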
-### authenticate2
-
-```python
-authenticate2(sock, user, password, src_addr, src_port, context, prot, n) -> tuple
-```
-
-This function does the same thing as maapi.authenticate(), but allows for passing the additional parameters src\_addr, src\_port, context, and prot, which otherwise are passed only to maapi\_start\_user\_session()/maapi\_start\_user\_session2(). The parameters are passed on to an external authentication executable.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* user -- username
-* password -- password
-* src\_addr -- ip address
-* src\_port -- port number
-* context -- context for the session
-* prot -- the protocol used by the client for connecting
-* n -- number of groups to return
-
-### candidate\_abort\_commit
-
-```python
-candidate_abort_commit(sock) -> None
-```
-
-Cancel an ongoing confirmed commit.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### candidate\_abort\_commit\_persistent
-
-```python
-candidate_abort_commit_persistent(sock, persist_id) -> None
-```
-
-Cancel an ongoing confirmed commit with the cookie given by persist\_id.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* persist\_id -- gives the cookie for an already ongoing persistent confirmed commit
-
-### candidate\_commit
-
-```python
-candidate_commit(sock) -> None
-```
-
-This function copies the candidate to running.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### candidate\_commit\_info
-
-```python
-candidate_commit_info(sock, persist_id, label, comment) -> None
-```
-
-Commit the candidate to running, or confirm an ongoing confirmed commit, and set the Label and/or Comment that is stored in the rollback file when the candidate is committed to running.
-
-Note:
-
-> To ensure the Label and/or Comment are stored in the rollback file in all cases when doing a confirmed commit, they must be given with both the confirmed commit (using maapi\_candidate\_confirmed\_commit\_info()) and the confirming commit (using this function).
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* persist\_id -- gives the cookie for an already ongoing persistent confirmed commit
-* label -- the Label
-* comment -- the Comment
-
-### candidate\_commit\_persistent
-
-```python
-candidate_commit_persistent(sock, persist_id) -> None
-```
-
-Confirm an ongoing persistent commit with the cookie given by persist\_id.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* persist\_id -- gives the cookie for an already ongoing persistent confirmed commit
-
-### candidate\_confirmed\_commit
-
-```python
-candidate_confirmed_commit(sock, timeoutsecs) -> None
-```
-
-This function also copies the candidate into running. However, if a call to candidate\_commit() is not made within timeoutsecs, an automatic rollback will occur.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* timeoutsecs -- timeout in seconds
-
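-A minimal confirmed-commit sketch, assuming 'sock' is a connected MAAPI socket with an active user session and a modified candidate:
-
-```python
-from _ncs import maapi
-
-maapi.candidate_confirmed_commit(sock, 600)  # auto-rollback after 600s
-# ... verify that the new configuration actually works ...
-maapi.candidate_commit(sock)  # confirm within the timeout
-# candidate_abort_commit(sock) would instead cancel and roll back
-```
-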
-### candidate\_confirmed\_commit\_info
-
-```python
-candidate_confirmed_commit_info(sock, timeoutsecs, persist, persist_id, label, comment) -> None
-```
-
-Like candidate\_confirmed\_commit\_persistent, but also allows for setting the Label and/or Comment that is stored in the rollback file when the candidate is committed to running.
-
-Note:
-
-> To ensure the Label and/or Comment are stored in the rollback file in all cases when doing a confirmed commit, they must be given with both the confirmed commit (using this function) and the confirming commit (using candidate\_commit\_info()).
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* timeoutsecs -- timeout in seconds
-* persist -- sets the cookie for the persistent confirmed commit
-* persist\_id -- gives the cookie for an already ongoing persistent confirmed commit
-* label -- the Label
-* comment -- the Comment
-
-### candidate\_confirmed\_commit\_persistent
-
-```python
-candidate_confirmed_commit_persistent(sock, timeoutsecs, persist, persist_id) -> None
-```
-
-Start or extend a confirmed commit using persist id.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* timeoutsecs -- timeout in seconds
-* persist -- sets the cookie for the persistent confirmed commit
-* persist\_id -- gives the cookie for an already ongoing persistent confirmed commit
-
-### candidate\_reset
-
-```python
-candidate_reset(sock) -> None
-```
-
-Copy running into candidate.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### candidate\_validate
-
-```python
-candidate_validate(sock) -> None
-```
-
-This function validates the candidate.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### cd
-
-```python
-cd(sock, thandle, path) -> None
-```
-
-Change current position in the tree.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* path -- position to change to
-
-### clear\_opcache
-
-```python
-clear_opcache(sock, path) -> None
-```
-
-Clears the operational data cache.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* path -- the path to the subtree to clear
-
-### cli\_accounting
-
-```python
-cli_accounting(sock, user, usid, cmdstr) -> None
-```
-
-Generates an audit log entry in the CLI audit log.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* user -- user to generate the entry for
-* usid -- user session id to generate the entry for
-* cmdstr -- the command string to log
-
-### cli\_cmd
-
-```python
-cli_cmd(sock, usess, buf) -> None
-```
-
-Execute CLI command in the ongoing CLI session.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* usess -- user session
-* buf -- string to write
-
-### cli\_cmd2
-
-```python
-cli_cmd2(sock, usess, buf, flags) -> None
-```
-
-Execute CLI command in an ongoing CLI session. The following flags are supported:
-
-* CMD\_NO\_FULLPATH - Do not perform the fullpath check on show commands.
-* CMD\_NO\_HIDDEN - Allows execution of hidden CLI commands.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* usess -- user session
-* buf -- string to write
-* flags -- as above
-
-### cli\_cmd3
-
-```python
-cli_cmd3(sock, usess, buf, flags, unhide) -> None
-```
-
-Execute CLI command in an ongoing CLI session.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* usess -- user session
-* buf -- string to write
-* flags -- as above
-* unhide -- a hide group that is unhidden during the execution of the command
-
-### cli\_cmd4
-
-```python
-cli_cmd4(sock, usess, buf, flags, unhide) -> None
-```
-
-Execute CLI command in an ongoing CLI session.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* usess -- user session
-* buf -- string to write
-* flags -- as above
-* unhide -- a hide group that is unhidden during the execution of the command
-
-### cli\_cmd\_to\_path
-
-```python
-cli_cmd_to_path(sock, line, nsize, psize) -> tuple
-```
-
-Returns a string of the C/I namespaced CLI path that can be associated with the given command. Returns a tuple of ns and path.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* line -- data model path as string
-* nsize -- limit length of namespace
-* psize -- limit length of path
-
-### cli\_cmd\_to\_path2
-
-```python
-cli_cmd_to_path2(sock, thandle, line, nsize, psize) -> tuple
-```
-
-Returns a string of the C/I namespaced CLI path that can be associated with the given command, in the context of the provided transaction handle. Returns a tuple of ns and path.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* line -- data model path as string
-* nsize -- limit length of namespace
-* psize -- limit length of path
-
-### cli\_diff\_cmd
-
-```python
-cli_diff_cmd(sock, thandle, thandle_old, flags, path, size) -> str
-```
-
-Get the diff between two sessions as a series of C/I CLI commands. Returns a string. If no changes exist between the two sessions for the given path, a \_ncs.error.Error will be raised with the error set to ERR\_BADPATH.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* thandle\_old -- transaction handle
-* flags -- as for cli\_path\_cmd
-* path -- as for cli\_path\_cmd
-* size -- limit diff
-
-### cli\_get
-
-```python
-cli_get(sock, usess, opt, size) -> str
-```
-
-Read CLI session parameter or attribute.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* usess -- user session
-* opt -- option to get
-* size -- maximum response size (optional, default 1024)
-
-### cli\_path\_cmd
-
-```python
-cli_path_cmd(sock, thandle, flags, path, size) -> str
-```
-
-Returns a string of the C/I CLI command that can be associated with the given path. The following flags are supported:
-
-* FLAG\_EMIT\_PARENTS - Emit the commands needed to reach the submode for the path.
-* FLAG\_DELETE - Emit the command to delete the given path.
-* FLAG\_NON\_RECURSIVE - Prevent all children of a container or list item from being displayed.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* flags -- as above
-* path -- the path for the cmd
-* size -- limit cmd
-
-### cli\_prompt
-
-```python
-cli_prompt(sock, usess, prompt, echo, size) -> str
-```
-
-Prompt user for a string.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* usess -- user session
-* prompt -- string to show the user
-* echo -- controls whether the input should be echoed or not: ECHO shows the input, NOECHO does not
-* size -- maximum response size (optional, default 1024)
-
-### cli\_set
-
-```python
-cli_set(sock, usess, opt, value) -> None
-```
-
-Set CLI session parameter.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* usess -- user session
-* opt -- option to set
-* value -- the new value of the session parameter
-
-### cli\_write
-
-```python
-cli_write(sock, usess, buf) -> None
-```
-
-Write to the cli.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* usess -- user session
-* buf -- string to write
-
-### close
-
-```python
-close(sock) -> None
-```
-
-Ends session and closes socket.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### commit\_trans
-
-```python
-commit_trans(sock, thandle) -> None
-```
-
-Final phase of a two phase transaction, committing the trans.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-
-### commit\_upgrade
-
-```python
-commit_upgrade(sock) -> None
-```
-
-Final step in an upgrade.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### confirmed\_commit\_in\_progress
-
-```python
-confirmed_commit_in_progress(sock) -> int
-```
-
-Checks whether a confirmed commit is ongoing. Returns a positive integer, the usid of the confirmed commit operation in progress, or 0 if no confirmed commit is in progress.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### connect
-
-```python
-connect(sock, ip, port, path) -> None
-```
-
-Connect to the system daemon.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* ip -- the ip address
-* port -- the port
-* path -- the path if socket is AF\_UNIX (optional)
-
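-A minimal connection sketch; the address is hypothetical and \_ncs.NCS\_PORT is assumed to hold the default IPC port:
-
-```python
-import socket
-import _ncs
-from _ncs import maapi
-
-sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
-maapi.connect(sock, '127.0.0.1', _ncs.NCS_PORT)
-maapi.load_schemas(sock)  # optionally load schema info (see below)
-```
-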
-### copy
-
-```python
-copy(sock, from_thandle, to_thandle) -> None
-```
-
-Copy all data from one data store to another.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* from\_thandle -- transaction handle
-* to\_thandle -- transaction handle
-
-### copy\_path
-
-```python
-copy_path(sock, from_thandle, to_thandle, path) -> None
-```
-
-Copy subtree rooted at path from one data store to another.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* from\_thandle -- transaction handle
-* to\_thandle -- transaction handle
-* path -- the subtree rooted at path is copied
-
-### copy\_running\_to\_startup
-
-```python
-copy_running_to_startup(sock) -> None
-```
-
-Copies running to startup.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### copy\_tree
-
-```python
-copy_tree(sock, thandle, frompath, topath) -> None
-```
-
-Copy subtree rooted at frompath to topath.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* frompath -- the subtree rooted at this path is copied
-* topath -- the path to which the subtree is copied
-
-### create
-
-```python
-create(sock, thandle, path) -> None
-```
-
-Create a new list entry, a presence container, or a leaf of type empty in the data tree (if the leaf of type empty is a member of a union, use set\_elem() instead).
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* path -- path of item to create
-
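-A minimal sketch, assuming 'sock'/'thandle' as in the examples above; the keypaths are hypothetical, and set\_elem2() is described later in this document:
-
-```python
-from _ncs import maapi
-
-maapi.create(sock, thandle, "/devices/device{ce0}")
-maapi.set_elem2(sock, thandle, "10.0.0.1", "/devices/device{ce0}/address")
-maapi.apply_trans(sock, thandle, False)
-```
-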
-### cs\_node\_cd
-
-```python
-cs_node_cd(socket, thandle, path) -> Union[_ncs.CsNode, None]
-```
-
-Utility function which finds the resulting CsNode given a string keypath.
-
-Does the same thing as \_ncs.cs\_node\_cd(), but can handle paths that are ambiguous due to traversing a mount point, by sending a request to the daemon.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* path -- the keypath
-
-### cs\_node\_children
-
-```python
-cs_node_children(sock, thandle, mount_point, path) -> List[_ncs.CsNode]
-```
-
-Retrieve a list of the children nodes of the node given by mount\_point that are valid for path. The mount\_point node must be a mount point (i.e. mount\_point.is\_mount\_point() == True), and the path must lead to a specific instance of this node (including the final keys if mount\_point is a list node). The thandle parameter is optional, i.e. it can be given as -1 if a transaction is not available.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* mount\_point -- a CsNode instance
-* path -- the path to the instance of the node
-
-### delete
-
-```python
-delete(sock, thandle, path) -> None
-```
-
-Delete an existing list entry, a presence container or a leaf of type empty from the data tree.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* path -- path of item to delete
-
-### delete\_all
-
-```python
-delete_all(sock, thandle, how) -> None
-```
-
-Delete all data within a transaction.
-
-The how argument specifies how to delete:
-
-* DEL\_SAFE - Delete everything except namespaces that were exported with tailf:export none. Top-level nodes that cannot be deleted due to AAA rules are left in place (descendant nodes may be deleted if the rules allow it).
-* DEL\_EXPORTED - As DEL\_SAFE, but AAA rules are ignored.
-* DEL\_ALL - Delete everything; AAA rules are ignored.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* how -- DEL\_SAFE, DEL\_EXPORTED or DEL\_ALL
-
-### delete\_config
-
-```python
-delete_config(sock, name) -> None
-```
-
-Empties a datastore.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* name -- name of the datastore to empty
-
-### destroy\_cursor
-
-```python
-destroy_cursor(mc) -> None
-```
-
-Deallocates memory which is associated with the cursor.
-
-Keyword arguments:
-
-* mc -- maapiCursor
-
-### detach
-
-```python
-detach(sock, ctx) -> None
-```
-
-Detaches an attached \_MAAPI socket.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* ctx -- transaction context
-
-### detach2
-
-```python
-detach2(sock, thandle) -> None
-```
-
-Detaches an attached \_MAAPI socket when we do not have a transaction context available.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-
-### diff\_iterate
-
-```python
-diff_iterate(sock, thandle, iter, flags) -> None
-```
-
-Iterate through a transaction diff.
-
-For each diff in the transaction the callback function 'iter' will be called. The iter function needs to have the following signature:
-
-```
-def iter(keypath, operation, oldvalue, newvalue)
-```
-
-Where arguments are:
-
-* keypath - the affected path (HKeypathRef)
-* operation - one of MOP\_CREATED, MOP\_DELETED, MOP\_MODIFIED, MOP\_VALUE\_SET, MOP\_MOVED\_AFTER, or MOP\_ATTR\_SET
-* oldvalue - always None
-* newvalue - see below
-
-The 'newvalue' argument may be set for operation MOP\_VALUE\_SET and is a Value object in that case. For MOP\_MOVED\_AFTER it may be set to a list of key values identifying an entry in the list - if it's None the list entry has been moved to the beginning of the list. For MOP\_ATTR\_SET it will be set to a 2-tuple of Value's where the first Value is the attribute set and the second Value is the value the attribute was set to. If the attribute has been deleted the second value is of type C\_NOEXISTS
-
-The iter function should return one of:
-
-* ITER\_STOP - Stop further iteration
-* ITER\_RECURSE - Recurse further down the node children
-* ITER\_CONTINUE - Ignore node children and continue with the node's siblings
-
-One could also define a class implementing the call function as:
-
-```
-class DiffIterator(object):
- def __init__(self):
- self.count = 0
-
- def __call__(self, kp, op, oldv, newv):
- print('kp={0}, op={1}, oldv={2}, newv={3}'.format(
- str(kp), str(op), str(oldv), str(newv)))
- self.count += 1
- return _confd.ITER_RECURSE
-```
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* iter -- iterator function, will be called for every diff in the transaction
-* flags -- bitmask of ITER\_WANT\_ATTR and ITER\_WANT\_P\_CONTAINER
-
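-Using the DiffIterator class from above could then look like this (a minimal sketch, assuming 'sock'/'thandle' as in the earlier examples):
-
-```python
-import _ncs
-from _ncs import maapi
-
-# Reuses the DiffIterator class defined above
-di = DiffIterator()
-maapi.diff_iterate(sock, thandle, di, _ncs.ITER_WANT_ATTR)
-print('number of diffs:', di.count)
-```
-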
-### disconnect\_remote
-
-```python
-disconnect_remote(sock, address) -> None
-```
-
-Disconnect all remote connections to 'address' except HA connections.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* address -- ip address (string)
-
-### disconnect\_sockets
-
-```python
-disconnect_sockets(sock, sockets) -> None
-```
-
-Disconnect 'sockets', which is a list of sockets (fileno).
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* sockets -- list of sockets (int)
-
-### do\_display
-
-```python
-do_display(sock, thandle, path) -> int
-```
-
-If the data model uses the YANG when or tailf:display-when statement, this function can be used to determine if the item given by 'path' should be displayed or not.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* path -- path to the 'display-when' statement
-
-### end\_progress\_span
-
-```python
-end_progress_span(sock, span, annotation) -> int
-```
-
-Ends a progress span started from start\_progress\_span() or start\_progress\_span\_th().
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* span -- span\_id (string) or dict with key 'span\_id'
-* annotation -- metadata about the event, indicating error, explains latency or shows result etc
-
-### end\_user\_session
-
-```python
-end_user_session(sock) -> None
-```
-
-End the MAAPI user session associated with the socket
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### exists
-
-```python
-exists(sock, thandle, path) -> bool
-```
-
-Check whether a node in the data tree exists. Returns boolean.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* path -- position to check
-
-### find\_next
-
-```python
-find_next(mc, type, inkeys) -> Union[List[_ncs.Value], bool]
-```
-
-Update the cursor mc with the key(s) for the list entry designated by the type and inkeys parameters. This function may be used to start a traversal from an arbitrary entry in a list. Keys for subsequent entries may be retrieved with the get\_next() function. When no more keys are found, False is returned.
-
-The strategy to use is defined by type:
-
-```
-FIND_NEXT - The keys for the first list entry after the one
- indicated by the inkeys argument.
-FIND_SAME_OR_NEXT - If the values in the inkeys array completely
- identifies an actual existing list entry, the keys for
- this entry are requested. Otherwise the same logic as
- for FIND_NEXT above.
-```
-
-Keyword arguments:
-
-* mc -- maapiCursor
-* type -- CONFD\_FIND\_NEXT or CONFD\_FIND\_SAME\_OR\_NEXT
-* inkeys -- where to start finding
-
-### finish\_trans
-
-```python
-finish_trans(sock, thandle) -> None
-```
-
-Finish a transaction.
-
-If the transaction is implemented by an external database, this will invoke the finish() callback.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-
-### get\_attrs
-
-```python
-get_attrs(sock, thandle, attrs, keypath) -> list
-```
-
-Get attributes for a node. Returns a list of attributes.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* attrs -- list of type of attributes to get
-* keypath -- path to choice
-
-### get\_authorization\_info
-
-```python
-get_authorization_info(sock, usessid) -> _ncs.AuthorizationInfo
-```
-
-This function retrieves authorization info for a user session, i.e. the groups that the user has been assigned to.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* usessid -- user session id
-
-### get\_case
-
-```python
-get_case(sock, thandle, choice, keypath) -> _ncs.Value
-```
-
-Get the case from a YANG choice statement.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* choice -- choice name
-* keypath -- path to choice
-
-### get\_elem
-
-```python
-get_elem(sock, thandle, path) -> _ncs.Value
-```
-
-Path must be a valid leaf node in the data tree. Returns a Value object.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* path -- position of elem
-
-### get\_my\_user\_session\_id
-
-```python
-get_my_user_session_id(sock) -> int
-```
-
-Returns user session id
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### get\_next
-
-```python
-get_next(mc) -> Union[List[_ncs.Value], bool]
-```
-
-Iterates and gets the keys for the next entry in a list. When no more keys are found, False is returned.
-
-Keyword arguments:
-
-* mc -- maapiCursor
-
-### get\_object
-
-```python
-get_object(sock, thandle, n, keypath) -> List[_ncs.Value]
-```
-
-Read at most n values from the list entry at keypath.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* n -- at most n values will be read
-* keypath -- position of the list entry
-
-### get\_objects
-
-```python
-get_objects(mc, n, nobj) -> List[_ncs.Value]
-```
-
-Read at most n values from each of nobj list entries, starting at the entry given by cursor mc. Returns a list of Values.
-
-Keyword arguments:
-
-* mc -- maapiCursor
-* n -- at most n values will be read
-* nobj -- number of nobj lists which n elements will be taken from
-
-### get\_rollback\_id
-
-```python
-get_rollback_id(sock, thandle) -> int
-```
-
-Get rollback id from a committed transaction. Returns the rollback id as an int, where -1 indicates an error or that no rollback id is available.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-
-### get\_running\_db\_status
-
-```python
-get_running_db_status(sock) -> int
-```
-
-If a transaction fails in the commit() phase, the configuration database is in a possibly inconsistent state. This function queries ConfD on the consistency state. Returns 1 if the configuration is consistent and 0 otherwise.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### get\_schema\_file\_path
-
-```python
-get_schema_file_path(sock) -> str
-```
-
-If shared memory schema support has been enabled, this function will return the pathname of the file used for the shared memory mapping, which can then be passed to the mmap\_schemas() function.
-
-If creation of the schema file is in progress when the function is called, the call will block until the creation has completed.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### get\_stream\_progress
-
-```python
-get_stream_progress(sock, id) -> int
-```
-
-Used in conjunction with a maapi stream to see how much data has been consumed.
-
-This function allows us to limit the amount of data 'in flight' between the application and the system. The sock parameter must be the maapi socket used for a function call that required a stream socket for writing (currently the only such function is load\_config\_stream()), and the id parameter is the id returned by that function.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* id -- the id returned from load\_config\_stream()
-
-### get\_templates
-
-```python
-get_templates(sock) -> list
-```
-
-Get the defined templates.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### get\_trans\_params
-
-```python
-get_trans_params(sock, thandle) -> list
-```
-
-Get the commit parameters for a transaction. The commit parameters are returned as a list of TagValue objects.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-
-### get\_user\_session
-
-```python
-get_user_session(sock, usessid) -> _ncs.UserInfo
-```
-
-Return user info.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* usessid -- session id
-
-### get\_user\_session\_identification
-
-```python
-get_user_session_identification(sock, usessid) -> dict
-```
-
-Get user session identification data.
-
-Get the user identification data related to a user session provided by the 'usessid' argument. The function returns a dict with the user identification data.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* usessid -- user session id
-
-### get\_user\_session\_opaque
-
-```python
-get_user_session_opaque(sock, usessid) -> str
-```
-
-Returns a string containing additional 'opaque' information, if any is available.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* usessid -- user session id
-
-### get\_user\_sessions
-
-```python
-get_user_sessions(sock) -> list
-```
-
-Return a list of session ids.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### get\_values
-
-```python
-get_values(sock, thandle, values, keypath) -> list
-```
-
-Get values from keypath based on the Tag Value array values.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* values -- list of tagValues
-* keypath -- path to the node to get values from
-
-### getcwd
-
-```python
-getcwd(sock, thandle) -> str
-```
-
-Get the current position in the tree as a string.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-
-### getcwd\_kpath
-
-```python
-getcwd_kpath(sock, thandle) -> _ncs.HKeypathRef
-```
-
-Get the current position in the tree as a HKeypathRef.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-
-### hide\_group
-
-```python
-hide_group(sock, thandle, group_name) -> None
-```
-
-Hide all nodes belonging to a hide group in a transaction that started with flag FLAG\_HIDE\_ALL\_HIDEGROUPS.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* group\_name -- the group name
-
-### init\_cursor
-
-```python
-init_cursor(sock, thandle, path) -> maapi.Cursor
-```
-
-Whenever we wish to iterate over the entries in a list in the data tree, we must first initialize a cursor.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* path -- position of elem
-* secondary\_index -- name of secondary index to use (optional)
-* xpath\_expr -- xpath expression used to filter results (optional)
-
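-A minimal traversal sketch using init\_cursor(), get\_next() and destroy\_cursor(); the list path is hypothetical, with 'sock'/'thandle' as above:
-
-```python
-from _ncs import maapi
-
-mc = maapi.init_cursor(sock, thandle, '/devices/device')
-keys = maapi.get_next(mc)
-while keys is not False:
-    print('entry keys:', [str(k) for k in keys])
-    keys = maapi.get_next(mc)
-maapi.destroy_cursor(mc)
-```
-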
-### init\_upgrade
-
-```python
-init_upgrade(sock, timeoutsecs, flags) -> None
-```
-
-First step in an upgrade, initializes the upgrade procedure.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* timeoutsecs -- maximum time to wait for user to voluntarily exit from 'configuration' mode
-* flags -- 0 or 'UPGRADE\_KILL\_ON\_TIMEOUT' (will terminate all ongoing transactions)
-
-### insert
-
-```python
-insert(sock, thandle, path) -> None
-```
-
-Insert a new entry in a list; the key of the list must be an integer.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* path -- the path of the list entry to insert
-
-### install\_crypto\_keys
-
-```python
-install_crypto_keys(sock) -> None
-```
-
-Copy configured AES keys into the memory in the library.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### is\_candidate\_modified
-
-```python
-is_candidate_modified(sock) -> bool
-```
-
-Checks if candidate is modified.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### is\_lock\_set
-
-```python
-is_lock_set(sock, name) -> int
-```
-
-Check if db name is locked. Return the 'usid' of the user holding the lock or 0 if not locked.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* name -- name of the database to check
-
-### is\_running\_modified
-
-```python
-is_running_modified(sock) -> bool
-```
-
-Checks if running is modified.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### iterate
-
-```python
-iterate(sock, thandle, iter, flags, path) -> None
-```
-
-Used to iterate over all the data in a transaction and the underlying data store, as opposed to iterating only over changes the way diff\_iterate does.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* iter -- iterator function, will be called for every element in the iteration
-* flags -- ITER\_WANT\_ATTR or 0
-* path -- receive only data from this path and below
-
-The iter callback function should have the following signature:
-
-```
-def my_iterator(kp, v, attr_vals)
-```
-
-### keypath\_diff\_iterate
-
-```python
-keypath_diff_iterate(sock, thandle, iter, flags, path) -> None
-```
-
-Like diff\_iterate but takes an additional path argument.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* iter -- iterator function, will be called for every diff in the transaction
-* flags -- bitmask of ITER\_WANT\_ATTR and ITER\_WANT\_P\_CONTAINER
-* path -- receive only changes from this path and below
-
-### kill\_user\_session
-
-```python
-kill_user_session(sock, usessid) -> None
-```
-
-Kill MAAPI user session with session id.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* usessid -- the MAAPI session id to be killed
-
-### load\_config
-
-```python
-load_config(sock, thandle, flags, filename) -> None
-```
-
-Loads configuration from 'filename'. The caller of the function has to indicate which format the file has by using one of the following flags:
-
-```
- CONFIG_XML -- XML format
- CONFIG_J -- Juniper curly bracket style
- CONFIG_C -- Cisco XR style
- CONFIG_TURBO_C -- A faster version of CONFIG_C
- CONFIG_C_IOS -- Cisco IOS style
-```
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- a transaction handle
-* flags -- as above
-* filename -- to read the configuration from
-
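-A minimal sketch loading an XML file into an open transaction; the filename is hypothetical, with 'sock'/'thandle' as above:
-
-```python
-from _ncs import maapi
-
-# CONFIG_XML is one of the format flags listed above
-maapi.load_config(sock, thandle, maapi.CONFIG_XML, '/tmp/config.xml')
-maapi.apply_trans(sock, thandle, False)
-```
-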
-### load\_config\_cmds
-
-```python
-load_config_cmds(sock, thandle, flags, cmds, path) -> None
-```
-
-Loads configuration from the string 'cmds'.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- a transaction handle
-* cmds -- a string of cmds
-* flags -- as above
-
-### load\_config\_stream
-
-```python
-load_config_stream(sock, th, flags) -> int
-```
-
-Loads configuration from the stream socket. The th and flags parameters are the same as for load\_config(). Returns an id.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- a transaction handle
-* flags -- as for load\_config()
-
-### load\_config\_stream\_result
-
-```python
-load_config_stream_result(sock, id) -> int
-```
-
-We use this function to verify that the configuration we wrote on the stream socket was successfully loaded.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* id -- the id returned from load\_config\_stream()
-
-### load\_schemas
-
-```python
-load_schemas(sock) -> None
-```
-
-Loads all schema information into the lib.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### load\_schemas\_list
-
-```python
-load_schemas_list(sock, flags, nshash, nsflags) -> None
-```
-
-Loads selected schema information into the lib.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* flags -- the flags to set
-* nshash -- the listed namespaces that schema information should be loaded for
-* nsflags -- namespace specific flags
-
-### lock
-
-```python
-lock(sock, name) -> None
-```
-
-Lock database with name.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* name -- name of the database to lock
-
-### lock\_partial
-
-```python
-lock_partial(sock, name, xpaths) -> int
-```
-
-Lock a subset (xpaths) of database name. Returns lockid.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* xpaths -- a list of strings
-
-### move
-
-```python
-move(sock, thandle, tokey, path) -> None
-```
-
-Moves an existing list entry, i.e. renames the entry using the tokey parameter.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* tokey -- confdValue list
-* path -- the path to the list entry to move
-
-### move\_ordered
-
-```python
-move_ordered(sock, thandle, where, tokey, path) -> None
-```
-
-Moves an entry in an 'ordered-by user' statement to a new position.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* where -- FIRST, LAST, BEFORE or AFTER
-* tokey -- confdValue list
-* path -- the path to the list entry to move
-
-### netconf\_ssh\_call\_home
-
-```python
-netconf_ssh_call_home(sock, host, port) -> None
-```
-
-Initiates a NETCONF SSH Call Home connection.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* host -- an ipv4 address, ipv6 address, or host name
-* port -- the port to connect to
-
-### netconf\_ssh\_call\_home\_opaque
-
-```python
-netconf_ssh_call_home_opaque(sock, host, opaque, port) -> None
-```
-
-Initiates a NETCONF SSH Call Home connection.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* host -- an ipv4 address, ipv6 address, or host name
-* opaque -- opaque string passed to an external call home session
-* port -- the port to connect to
-
-### num\_instances
-
-```python
-num_instances(sock, thandle, path) -> int
-```
-
-Return the number of instances in a list in the tree.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* path -- position to check
-
-### perform\_upgrade
-
-```python
-perform_upgrade(sock, loadpathdirs) -> None
-```
-
-Second step in an upgrade. Loads new data model files.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* loadpathdirs -- list of directories that are searched for CDB 'init' files
-
-### popd
-
-```python
-popd(sock, thandle) -> None
-```
-
-Return to earlier saved (pushd) position in the tree.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-
-### prepare\_trans
-
-```python
-prepare_trans(sock, thandle) -> None
-```
-
-First phase of a two-phase trans.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-
-### prepare\_trans\_flags
-
-```python
-prepare_trans_flags(sock, thandle, flags) -> None
-```
-
-First phase of a two-phase trans with flags.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* flags -- flags to set in the transaction
-
-### prio\_message
-
-```python
-prio_message(sock, to, message) -> None
-```
-
-Like sys\_message but will be output directly instead of delivered when the receiver terminates any ongoing command.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* to -- user to send message to or 'all' to send to all users
-* message -- the message
-
-### progress\_info
-
-```python
-progress_info(sock, msg, verbosity, attrs, links, path) -> None
-```
-
-While spans represent a pair of data points, start and stop, info events are singular events: one point in time. Call progress\_info() to write a progress span info event to the progress trace. The info event will have the same span-id as the start and stop events of the currently ongoing progress span in the active user session or transaction. See start\_progress\_span() for more information.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* msg -- message to report
-* verbosity -- VERBOSITY\_\*, default: VERBOSITY\_NORMAL (optional)
-* attrs -- user defined attributes (dict)
-* links -- to existing traces or spans \[{'trace\_id':'...', 'span\_id':'...'}]
-* path -- keypath to an action/leaf/service
-
-### progress\_info\_th
-
-```python
-progress_info_th(sock, thandle, msg, verbosity, attrs, links, path) ->
- None
-```
-
-While spans represent a pair of data points, start and stop, info events are singular events: one point in time. Call progress\_info\_th() to write a progress span info event to the progress trace. The info event will have the same span-id as the start and stop events of the currently ongoing progress span in the active user session or transaction. See start\_progress\_span() for more information.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* msg -- message to report
-* verbosity -- VERBOSITY\_\*, default: VERBOSITY\_NORMAL (optional)
-* attrs -- user defined attributes (dict)
-* links -- to existing traces or spans \[{'trace\_id':'...', 'span\_id':'...'}]
-* path -- keypath to an action/leaf/service
-
-### pushd
-
-```python
-pushd(sock, thandle, path) -> None
-```
-
-Like cd, but saves the previous position in the tree. This can later be used by popd to return.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* path -- position to change to
-
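-A minimal navigation sketch combining cd(), pushd(), getcwd() and popd(); the paths are hypothetical, with 'sock'/'thandle' as above:
-
-```python
-from _ncs import maapi
-
-maapi.cd(sock, thandle, '/devices')
-maapi.pushd(sock, thandle, '/devices/device{ce0}')
-print(maapi.getcwd(sock, thandle))  # /devices/device{ce0}
-maapi.popd(sock, thandle)
-print(maapi.getcwd(sock, thandle))  # back at /devices
-```
-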
-### query\_free\_result
-
-```python
-query_free_result(qrs) -> None
-```
-
-Deallocates the struct returned by 'query\_result()'.
-
-Keyword arguments:
-
-* qrs -- the query result structure to free
-
-### query\_reset
-
-```python
-query_reset(sock, qh) -> None
-```
-
-Reset the query to the beginning again.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* qh -- query handle
-
-### query\_reset\_to
-
-```python
-query_reset_to(sock, qh, offset) -> None
-```
-
-Reset the query to offset.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* qh -- query handle
-* offset -- offset counted from the beginning
-
-### query\_result
-
-```python
-query_result(sock, qh) -> _ncs.QueryResult
-```
-
-Fetches the next available chunk of results associated with query handle qh.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* qh -- query handle
-
-### query\_result\_count
-
-```python
-query_result_count(sock, qh) -> int
-```
-
-Counts the number of query results.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* qh -- query handle
-
-### query\_start
-
-```python
-query_start(sock, thandle, expr, context_node, chunk_size, initial_offset,
- result_as, select, sort) -> int
-```
-
-Starts a new query attached to the transaction given in 'thandle'. Returns a query handle.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* expr -- the XPath Path expression to evaluate
-* context\_node -- The context node (an ikeypath) for the primary expression, or None (which means that the context node will be /).
-* chunk\_size -- How many results to return at a time. If set to 0, a default number will be used.
-* initial\_offset -- Which result in line to begin with (1 means to start from the beginning).
-* result\_as -- The format the results will be returned in.
-* select -- An array of XPath 'select' expressions.
-* sort -- An array of XPath expressions which will be used for sorting
-
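-A minimal query sketch; the XPath expression is hypothetical, QUERY\_STRING is assumed to be one of the available result formats, and the 'nresults' field is assumed to mirror the corresponding C result struct:
-
-```python
-import _ncs
-from _ncs import maapi
-
-qh = maapi.query_start(sock, thandle, '/devices/device', None,
-                       100, 1, _ncs.QUERY_STRING, ['name'], [])
-while True:
-    qrs = maapi.query_result(sock, qh)  # next chunk of results
-    if qrs.nresults <= 0:
-        maapi.query_free_result(qrs)
-        break
-    # ... process the qrs.nresults entries in this chunk ...
-    maapi.query_free_result(qrs)
-maapi.query_stop(sock, qh)
-```
-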
-### query\_stop
-
-```python
-query_stop(sock, qh) -> None
-```
-
-Stop the running query.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* qh -- query handle
-
-### rebind\_listener
-
-```python
-rebind_listener(sock, listener) -> None
-```
-
-Request that the subsystems specified by 'listener' rebind their listener socket(s).
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* listener -- One of the following parameters (ORed together if more than one)
-
- ```
- LISTENER_IPC
- LISTENER_NETCONF
- LISTENER_SNMP
- LISTENER_CLI
- LISTENER_WEBUI
- ```
-
-### reload\_config
-
-```python
-reload_config(sock) -> None
-```
-
-Request that the system reloads its configuration files.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### reopen\_logs
-
-```python
-reopen_logs(sock) -> None
-```
-
-Request that the system closes and re-opens its log files.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### report\_progress
-
-```python
-report_progress(sock, thandle, verbosity, msg) -> None
-```
-
-Report progress events.
-
-This function makes it possible to report transaction/action progress from user code.
-
-This function is deprecated and will be removed in a future release. Use progress\_info() instead.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* verbosity -- at which verbosity level the message should be reported
-* msg -- message to report
-
-### report\_progress2
-
-```python
-report_progress2(sock, thandle, verbosity, msg, package) -> None
-```
-
-Report progress events.
-
-This function makes it possible to report transaction/action progress from user code.
-
-This function is deprecated and will be removed in a future release. Use progress\_info() instead.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* verbosity -- at which verbosity level the message should be reported
-* msg -- message to report
-* package -- from what package the message is reported
-
-### report\_progress\_start
-
-```python
-report_progress_start(sock, thandle, verbosity, msg, package) -> int
-```
-
-Report progress events. Used for calculation of the duration between two events.
-
-This function makes it possible to report transaction/action progress from user code.
-
-This function is deprecated and will be removed in a future release. Use start\_progress\_span() instead.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* verbosity -- at which verbosity level the message should be reported
-* msg -- message to report
-* package -- from what package the message is reported (only NCS)
-
-### report\_progress\_stop
-
-```python
-report_progress_stop(sock, thandle, verbosity, msg, annotation,
-                     package, timestamp) -> int
-```
-
-Report progress events. Used for calculation of the duration between two events.
-
-This function makes it possible to report transaction/action progress from user code.
-
-This function is deprecated and will be removed in a future release. Use end\_progress\_span() instead.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* verbosity -- at which verbosity level the message should be reported
-* msg -- message to report
-* annotation -- metadata about the event, indicating error, explains latency or shows result etc
-* package -- from what package the message is reported (only NCS)
-* timestamp -- start of the event
-
-### report\_service\_progress
-
-```python
-report_service_progress(sock, thandle, verbosity, msg, path) -> None
-```
-
-Report progress events for a service.
-
-This function makes it possible to report transaction progress from FASTMAP code.
-
-This function is deprecated and will be removed in a future release. Use progress\_info() instead.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* verbosity -- at which verbosity level the message should be reported
-* msg -- message to report
-* path -- service instance path
-
-### report\_service\_progress2
-
-```python
-report_service_progress2(sock, thandle, verbosity, msg, package, path) -> None
-```
-
-Report progress events for a service.
-
-This function makes it possible to report transaction progress from FASTMAP code.
-
-This function is deprecated and will be removed in a future release. Use progress\_info() instead.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* verbosity -- at which verbosity level the message should be reported
-* msg -- message to report
-* package -- from what package the message is reported
-* path -- service instance path
-
-### report\_service\_progress\_start
-
-```python
-report_service_progress_start(sock, thandle, verbosity, msg, package, path) -> int
-```
-
-Report progress events for a service. Used for calculation of the duration between two events.
-
-This function makes it possible to report transaction progress from FASTMAP code.
-
-This function is deprecated and will be removed in a future release. Use start\_progress\_span() instead.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* verbosity -- at which verbosity level the message should be reported
-* msg -- message to report
-* package -- from what package the message is reported
-* path -- service instance path
-
-### report\_service\_progress\_stop
-
-```python
-report_service_progress_stop(sock, thandle, verbosity, msg, annotation,
-                             package, path, timestamp) -> None
-```
-
-Report progress events for a service. Used for calculation of the duration between two events.
-
-This function makes it possible to report transaction progress from FASTMAP code.
-
-This function is deprecated and will be removed in a future release. Use end\_progress\_span() instead.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* verbosity -- at which verbosity level the message should be reported
-* msg -- message to report
-* annotation -- metadata about the event, indicating error, explains latency or shows result etc
-* package -- from what package the message is reported
-* path -- service instance path
-* timestamp -- start of the event
-
-### request\_action
-
-```python
-request_action(sock, params, hashed_ns, path) -> list
-```
-
-Invoke an action defined in the data model. Returns a list of tagValues.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* params -- tagValue parameters for the action
-* hashed\_ns -- namespace
-* path -- path to action
-
-### request\_action\_str\_th
-
-```python
-request_action_str_th(sock, thandle, cmd, path) -> str
-```
-
-The same as request\_action\_th but takes the parameters as a string and returns the result as a string.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* cmd -- string parameters
-* path -- path to action
-
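-A minimal sketch; the action path and the empty parameter string are hypothetical, with 'sock'/'thandle' as above:
-
-```python
-from _ncs import maapi
-
-output = maapi.request_action_str_th(sock, thandle, '',
-                                     '/devices/check-sync')
-print(output)
-```
-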
-### request\_action\_th
-
-```python
-request_action_th(sock, thandle, params, path) -> list
-```
-
-Same as for request\_action() but uses the current namespace.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* params -- tagValue parameters for the action
-* path -- path to action
-
-### revert
-
-```python
-revert(sock, thandle) -> None
-```
-
-Removes all changes done to the transaction.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-
-### roll\_config
-
-```python
-roll_config(sock, thandle, path) -> int
-```
-
-This function can be used to save the equivalent of a rollback file for a given configuration (or a subtree thereof) before it is committed, in curly bracket format. Returns an id.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* path -- tree for which to save the rollback configuration
-
-### roll\_config\_result
-
-```python
-roll_config_result(sock, id) -> int
-```
-
-We use this function to assert that we received the entire rollback configuration over a stream socket.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* id -- the id returned from roll\_config()
-
-### save\_config
-
-```python
-save_config(sock, thandle, flags, path) -> int
-```
-
-Save the config, returns an id. The flags parameter controls the saving as follows. The value is a bitmask.
-
-```
- CONFIG_XML -- The configuration format is XML.
- CONFIG_XML_PRETTY -- The configuration format is pretty printed XML.
- CONFIG_JSON -- The configuration is in JSON format.
- CONFIG_J -- The configuration is in curly bracket Juniper CLI
- format.
- CONFIG_C -- The configuration is in Cisco XR style format.
- CONFIG_TURBO_C -- The configuration is in Cisco XR style format.
- A faster parser than the normal CLI will be used.
- CONFIG_C_IOS -- The configuration is in Cisco IOS style format.
- CONFIG_XPATH -- The path gives an XPath filter instead of a
- keypath. Can only be used with CONFIG_XML and
- CONFIG_XML_PRETTY.
- CONFIG_WITH_DEFAULTS -- Default values are part of the
- configuration dump.
- CONFIG_SHOW_DEFAULTS -- Default values are also shown next to
- the real configuration value. Applies only to the CLI formats.
- CONFIG_WITH_OPER -- Include operational data in the dump.
- CONFIG_HIDE_ALL -- Hide all hidden nodes.
- CONFIG_UNHIDE_ALL -- Unhide all hidden nodes.
- CONFIG_WITH_SERVICE_META -- Include NCS service-meta-data
- attributes(refcounter, backpointer, out-of-band and
- original-value) in the dump.
- CONFIG_NO_PARENTS -- When a path is provided its parent nodes are by
- default included. With this option the output will begin
- immediately at path - skipping any parents.
- CONFIG_OPER_ONLY -- Include only operational data, and ancestors to
- operational data nodes, in the dump.
- CONFIG_NO_BACKQUOTE -- This option can only be used together with
- CONFIG_C and CONFIG_C_IOS. When set backslash will not be quoted
- in strings.
- CONFIG_CDB_ONLY -- Include only data stored in CDB in the dump. By
- default only configuration data is included, but the flag can be
- combined with either CONFIG_WITH_OPER or CONFIG_OPER_ONLY to
- save both configuration and operational data, or only
- operational data, respectively.
-```
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* flags -- as above
-* path -- save only configuration below path
-
-### save\_config\_result
-
-```python
-save_config_result(sock, id) -> None
-```
-
-Verify that we received the entire configuration over the stream socket.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* id -- the id returned from save\_config
-
-### set\_attr
-
-```python
-set_attr(sock, thandle, attr, v, keypath) -> None
-```
-
-Set attributes for a node.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* attr -- attributes to set
-* v -- value to set the attribute to
-* keypath -- path to choice
-
-### set\_comment
-
-```python
-set_comment(sock, thandle, comment) -> None
-```
-
-Set the Comment that is stored in the rollback file when a transaction towards running is committed.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* comment -- the Comment
-
-### set\_delayed\_when
-
-```python
-set_delayed_when(sock, thandle, on) -> None
-```
-
-This function enables (on non-zero) or disables (on == 0) the 'delayed when' mode of a transaction.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* on -- disables when on=0, enables for all other values
-
-### set\_elem
-
-```python
-set_elem(sock, thandle, v, path) -> None
-```
-
-Set element to confdValue.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* v -- confdValue
-* path -- position of elem
-
-### set\_elem2
-
-```python
-set_elem2(sock, thandle, strval, path) -> None
-```
-
-Set element to string.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* strval -- string value
-* path -- position of elem
-
-### set\_flags
-
-```python
-set_flags(sock, thandle, flags) -> None
-```
-
-Modify read/write session aspect. See MAAPI\_FLAG\_xyz.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* flags -- flags to set
-
-### set\_label
-
-```python
-set_label(sock, thandle, label) -> None
-```
-
-Set the Label that is stored in the rollback file when a transaction towards running is committed.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* label -- the Label
-
-### set\_namespace
-
-```python
-set_namespace(sock, thandle, hashed_ns) -> None
-```
-
-Indicate which namespace to use in case of ambiguities.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* hashed\_ns -- the namespace to use
-
-### set\_next\_user\_session\_id
-
-```python
-set_next_user_session_id(sock, usessid) -> None
-```
-
-Set the user session id that will be assigned to the next user session started. The given value is silently forced to be in the range 100 .. 2^31-1. This function can be used to ensure that session ids for user sessions started by northbound agents or via MAAPI are unique across a restart.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* usessid -- user session id
-
-### set\_object
-
-```python
-set_object(sock, thandle, values, keypath) -> None
-```
-
-Set leafs at path to object.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* values -- list of values
-* keypath -- path to set
-
-### set\_readonly\_mode
-
-```python
-set_readonly_mode(sock, flag) -> None
-```
-
-Control whether northbound agents should be able to write.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* flag -- non-zero means read-only mode
-
-### set\_running\_db\_status
-
-```python
-set_running_db_status(sock, status) -> None
-```
-
-Sets the notion of consistent state of the running db.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* status -- integer status to set
-
-### set\_user\_session
-
-```python
-set_user_session(sock, usessid) -> None
-```
-
-Associate a socket with an already existing user session.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* usessid -- user session id
-
-### set\_values
-
-```python
-set_values(sock, thandle, values, keypath) -> None
-```
-
-Set leafs at path to values.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* values -- list of tagValues
-* keypath -- path to set
-
-### shared\_apply\_template
-
-```python
-shared_apply_template(sock, thandle, template, variables, flags, rootpath) -> None
-```
-
-FASTMAP version of ncs\_apply\_template.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* template -- template name
-* variables -- None or a list of variables in the form of tuples
-* flags -- Must be set as 0
-* rootpath -- in what context to apply the template
-
-### shared\_copy\_tree
-
-```python
-shared_copy_tree(sock, thandle, flags, frompath, topath) -> None
-```
-
-FASTMAP version of copy\_tree.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* flags -- Must be set as 0
-* frompath -- the path to copy the tree from
-* topath -- the path to copy the tree to
-
-### shared\_create
-
-```python
-shared_create(sock, thandle, flags, path) -> None
-```
-
-FASTMAP version of create.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* flags -- Must be set as 0
-* path -- the path of the item to create
-
-### shared\_insert
-
-```python
-shared_insert(sock, thandle, flags, path) -> None
-```
-
-FASTMAP version of insert.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* flags -- Must be set as 0
-* path -- the path to the list to insert a new entry into
-
-### shared\_set\_elem
-
-```python
-shared_set_elem(sock, thandle, v, flags, path) -> None
-```
-
-FASTMAP version of set\_elem.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* v -- the value to set
-* flags -- should be 0
-* path -- the path to the element to set
-
-### shared\_set\_elem2
-
-```python
-shared_set_elem2(sock, thandle, strval, flags, path) -> None
-```
-
-FASTMAP version of set\_elem2.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* strval -- the value to set
-* flags -- should be 0
-* path -- the path to the element to set
-
-### shared\_set\_values
-
-```python
-shared_set_values(sock, thandle, values, flags, keypath) -> None
-```
-
-FASTMAP version of set\_values.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* values -- list of tagValues
-* flags -- should be 0
-* keypath -- path to set
-
-### snmpa\_reload
-
-```python
-snmpa_reload(sock, synchronous) -> None
-```
-
-Start a reload of SNMP Agent config from external data provider.
-
-Used by external data provider to notify that there is a change to the SNMP Agent config data. Calling the function with the argument 'synchronous' set to 1 or True means that the call will block until the loading is completed.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* synchronous -- if 1, block until the loading is complete; if 0, only initiate the loading and return immediately
-
-### start\_phase
-
-```python
-start_phase(sock, phase, synchronous) -> None
-```
-
-When the system has been started in phase 0, this function tells the system to proceed to start phase 1 or 2.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* phase -- phase to start, 1 or 2
-* synchronous -- if 1, block until the start phase transition is complete; if 0, only initiate the loading of AAA data and return immediately
-
-### start\_progress\_span
-
-```python
-start_progress_span(sock, msg, verbosity, attrs, links, path) -> dict
-```
-
-Starts a progress span. Progress spans are trace messages written to the progress trace and the developer log. A progress span consists of a start and a stop event, which can be used to calculate the duration between the two. Those events can be identified with unique span-ids. Inside a span it is possible to start new spans, which then become child spans; the parent-span-id of a child span is set to the span-id of the enclosing span. A child span can be used to calculate the duration of a sub task: it is started by a consecutive maapi\_start\_progress\_span() call and ended with maapi\_end\_progress\_span().
-
-The concepts of traces, trace-id and spans are highly influenced by https://opentelemetry.io/docs/concepts/signals/traces/#spans
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* msg -- message to report
-* verbosity -- VERBOSITY\_\*, default: VERBOSITY\_NORMAL (optional)
-* attrs -- user defined attributes (dict)
-* links -- to existing traces or spans \[{'trace\_id':'...', 'span\_id':'...'}]
-* path -- keypath to an action/leaf/service
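-
-For example, a minimal sketch where 'sock' is assumed to be a connected maapi socket and the message and keypath are hypothetical; the span is closed with end\_progress\_span(), documented earlier in this module:
-
-```python
-import _ncs
-from _ncs import maapi
-
-span = maapi.start_progress_span(sock, 'provisioning vpn',
-                                 _ncs.VERBOSITY_NORMAL, {}, [],
-                                 '/devices/device{ce0}')
-# ... perform the sub task being measured ...
-maapi.end_progress_span(sock, span, '')
-```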
-
-### start\_progress\_span\_th
-
-```python
-start_progress_span_th(sock, thandle, msg, verbosity,
- attrs, links, path) -> dict
-```
-
-Starts a progress span. Progress spans are trace messages written to the progress trace and the developer log. A progress span consists of a start and a stop event, which can be used to calculate the duration between the two. Those events can be identified with unique span-ids. Inside a span it is possible to start new spans, which then become child spans; the parent-span-id of a child span is set to the span-id of the enclosing span. A child span can be used to calculate the duration of a sub task: it is started by a consecutive maapi\_start\_progress\_span() call and ended with maapi\_end\_progress\_span().
-
-The concepts of traces, trace-id and spans are highly influenced by https://opentelemetry.io/docs/concepts/signals/traces/#spans
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* msg -- message to report
-* verbosity -- VERBOSITY\_\*, default: VERBOSITY\_NORMAL (optional)
-* attrs -- user defined attributes (dict)
-* links -- to existing traces or spans \[{'trace\_id':'...', 'span\_id':'...'}]
-* path -- keypath to an action/leaf/service
-
-### start\_trans
-
-```python
-start_trans(sock, name, readwrite) -> int
-```
-
-Creates a new transaction towards the data store specified by name, which can be one of CONFD\_CANDIDATE, CONFD\_RUNNING, or CONFD\_STARTUP (however updating the startup data store is better done via maapi\_copy\_running\_to\_startup()). The readwrite parameter can be either CONFD\_READ, to start a read-only transaction, or CONFD\_READ\_WRITE, to start a read-write transaction. The function returns the transaction id.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* name -- name of the database
-* readwrite -- CONFD\_READ or CONFD\_READ\_WRITE
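-
-For example, a minimal end-to-end sketch, assuming NCS listens on the default IPC port and the hypothetical device 'ce0' exists; connect(), set\_elem2(), apply\_trans() and finish\_trans() are documented elsewhere in this module:
-
-```python
-import socket
-import _ncs
-from _ncs import maapi
-
-sock = socket.socket()
-maapi.connect(sock, '127.0.0.1', _ncs.NCS_PORT)
-maapi.start_user_session(sock, 'admin', 'python', [], '127.0.0.1',
-                         _ncs.PROTO_TCP)
-th = maapi.start_trans(sock, _ncs.RUNNING, _ncs.READ_WRITE)
-maapi.set_elem2(sock, th, 'lab device', '/devices/device{ce0}/description')
-maapi.apply_trans(sock, th, False)
-maapi.finish_trans(sock, th)
-```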
-
-### start\_trans2
-
-```python
-start_trans2(sock, name, readwrite, usid) -> int
-```
-
-Start a transaction within an existing user session, returns the transaction id.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* name -- name of the database
-* readwrite -- CONFD\_READ or CONFD\_READ\_WRITE
-* usid -- user session id
-
-### start\_trans\_flags
-
-```python
-start_trans_flags(sock, name, readwrite, usid, flags) -> int
-```
-
-The same as start\_trans2, but can also set the same flags that 'set\_flags' can set.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* name -- name of the database
-* readwrite -- CONFD\_READ or CONFD\_READ\_WRITE
-* usid -- user session id
-* flags -- same as for 'set\_flags'
-
-### start\_trans\_flags2
-
-```python
-start_trans_flags2(sock, name, readwrite, usid, flags, vendor, product,
-                   version, client_id) -> int
-```
-
-This function does the same as start\_trans\_flags() but allows for additional information to be passed to ConfD/NCS.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* name -- name of the database
-* readwrite -- CONFD\_READ or CONFD\_READ\_WRITE
-* usid -- user session id
-* flags -- same as for 'set\_flags'
-* vendor -- vendor string (may be None)
-* product -- product string (may be None)
-* version -- version string (may be None)
-* client\_id -- client identification string (may be None)
-
-### start\_trans\_in\_trans
-
-```python
-start_trans_in_trans(sock, readwrite, usid, thandle) -> int
-```
-
-Start a transaction within an existing transaction, using the started transaction as backend instead of an actual data store. Returns the transaction id as an integer.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* readwrite -- CONFD\_READ or CONFD\_READ\_WRITE
-* usid -- user session id
-* thandle -- identifies the backend transaction to use
-
-### start\_user\_session
-
-```python
-start_user_session(sock, username, context, groups, src_addr, prot) -> None
-```
-
-Establish a user session on the socket.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* username -- the user for the session
-* context -- context for the session
-* groups -- groups
-* src\_addr -- source address of e.g. the connecting client
-* prot -- the protocol used by the client for connecting
-
-### start\_user\_session2
-
-```python
-start_user_session2(sock, username, context, groups, src_addr, src_port, prot) -> None
-```
-
-Establish a user session on the socket.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* username -- the user for the session
-* context -- context for the session
-* groups -- groups
-* src\_addr -- source address of e.g. the connecting client
-* src\_port -- source port of e.g. the connecting client
-* prot -- the protocol used by the client for connecting
-
-### start\_user\_session3
-
-```python
-start_user_session3(sock, username, context, groups, src_addr, src_port, prot, vendor, product, version, client_id) -> None
-```
-
-Establish a user session on the socket.
-
-This function does the same as start\_user\_session2() but allows for additional information to be passed to ConfD/NCS.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* username -- the user for the session
-* context -- context for the session
-* groups -- groups
-* src\_addr -- source address of e.g. the connecting client
-* src\_port -- source port of e.g. the connecting client
-* prot -- the protocol used by the client for connecting
-* vendor -- vendor string (may be None)
-* product -- product string (may be None)
-* version -- version string (may be None)
-* client\_id -- client identification string (may be None)
-
-### start\_user\_session\_gen
-
-```python
-start_user_session_gen(sock, username, context, groups, vendor, product, version, client_id) -> None
-```
-
-Establish a user session on the socket.
-
-This function does the same as start\_user\_session3() but it takes the source address of the supplied socket from the OS.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* username -- the user for the session
-* context -- context for the session
-* groups -- groups
-* vendor -- vendor string (may be None)
-* product -- product string (may be None)
-* version -- version string (may be None)
-* client\_id -- client identification string (may be None)
-
-### stop
-
-```python
-stop(sock) -> None
-```
-
-Request that the system stop.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-
-### sys\_message
-
-```python
-sys_message(sock, to, message) -> None
-```
-
-Send a message to a specific user, a specific session, or all users, depending on the 'to' parameter. The value 'all' can be used to address all users.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* to -- user to send message to or 'all' to send to all users
-* message -- the message
-
-### unhide\_group
-
-```python
-unhide_group(sock, thandle, group_name) -> None
-```
-
-Unhide all nodes belonging to a hide group in a transaction that was started with the flag FLAG\_HIDE\_ALL\_HIDEGROUPS.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* group\_name -- the group name
-
-### unlock
-
-```python
-unlock(sock, name) -> None
-```
-
-Unlock database with name.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* name -- name of the database to unlock
-
-### unlock\_partial
-
-```python
-unlock_partial(sock, lockid) -> None
-```
-
-Unlock a subset of a database which is locked by lockid.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* lockid -- id of the lock
-
-### user\_message
-
-```python
-user_message(sock, to, message, sender) -> None
-```
-
-Send a message to a specific user.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* to -- user to send message to or 'all' to send to all users
-* message -- the message
-* sender -- the sender of the message
-
-### validate\_trans
-
-```python
-validate_trans(sock, thandle, unlock, forcevalidation) -> None
-```
-
-Validates all data written in a transaction.
-
-If unlock is 1 (or True), the transaction remains open for further editing even if validation succeeds. If unlock is 0 (or False) and the function returns CONFD\_OK, the next function to be called MUST be maapi\_prepare\_trans() or maapi\_finish\_trans().
-
-unlock = 1 can be used to implement a 'validate' command which can be given in the middle of an editing session. The first thing that happens is that a lock is set. If unlock == 1, the lock is released on success. The lock is always released on failure.
-
-The forcevalidation argument should normally be 0 (or False). It has no effect for a transaction towards the running or startup data stores; validation is always performed there. For a transaction towards the candidate data store, validation will not be done unless forcevalidation is non-zero.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* unlock -- int or bool
-* forcevalidation -- int or bool
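-
-For example, a sketch of a mid-session 'validate' command, assuming 'sock' and 'th' refer to an open read-write transaction:
-
-```python
-from _ncs import maapi
-
-# unlock=1 keeps the transaction open for further edits if validation
-# succeeds; forcevalidation=0 is the normal case.
-maapi.validate_trans(sock, th, 1, 0)
-```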
-
-### wait\_start
-
-```python
-wait_start(sock, phase) -> None
-```
-
-Wait for the system to reach a certain start phase (0, 1 or 2).
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* phase -- phase to wait for, 0, 1 or 2
-
-### write\_service\_log\_entry
-
-```python
-write_service_log_entry(sock, path, msg, type, level) -> None
-```
-
-Write service log entries.
-
-This function makes it possible to write service log entries from FASTMAP code.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* path -- service instance path
-* msg -- message to log
-* type -- log entry type
-* level -- log entry level
-
-### xpath2kpath
-
-```python
-xpath2kpath(sock, xpath) -> _ncs.HKeypathRef
-```
-
-Convert an xpath to a hashed keypath.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* xpath -- to convert
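-
-For example, a sketch assuming 'sock' is a connected maapi socket, schemas are loaded, and the instance exists; the device name is hypothetical:
-
-```python
-import _ncs
-from _ncs import maapi
-
-kp = maapi.xpath2kpath(sock, "/devices/device[name='ce0']")
-print(_ncs.pp_kpath(kp))  # /devices/device{ce0}
-```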
-
-### xpath2kpath\_th
-
-```python
-xpath2kpath_th(sock, thandle, xpath) -> _ncs.HKeypathRef
-```
-
-Convert an xpath to a hashed keypath.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* xpath -- to convert
-
-### xpath\_eval
-
-```python
-xpath_eval(sock, thandle, expr, result, trace, path) -> None
-```
-
-Evaluate the xpath expression in 'expr'. For each node in the resulting node set, the function 'result' is called with the keypath to the resulting node as the first argument and, if the node is a leaf and has a value, the value of that node as the second argument. For each invocation, 'result' should return ITER\_CONTINUE to tell the XPath evaluator to continue or ITER\_STOP to stop the evaluation. A trace function, 'trace', can be supplied and will be called with a single string as an argument; 'None' can be used if no trace is needed. Unless a 'path' is given, the root node is used as the context for the evaluation.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* expr -- the XPath Path expression to evaluate
-* result -- the result function
-* trace -- a trace function that takes a string as a parameter
-* path -- the context node
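-
-For example, a sketch assuming 'sock' and 'th' are a connected maapi socket and a started read transaction:
-
-```python
-import _ncs
-from _ncs import maapi
-
-def result(kp, value):
-    # Called once per matching node; value is None for non-leafs.
-    print(_ncs.pp_kpath(kp), value)
-    return _ncs.ITER_CONTINUE
-
-# '/' makes the root the context node for the evaluation.
-maapi.xpath_eval(sock, th, '/devices/device/name', result, None, '/')
-```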
-
-### xpath\_eval\_expr
-
-```python
-xpath_eval_expr(sock, thandle, expr, trace, path) -> str
-```
-
-Like xpath\_eval() but returns the result as a string.
-
-Keyword arguments:
-
-* sock -- a python socket instance
-* thandle -- transaction handle
-* expr -- the XPath Path expression to evaluate
-* trace -- a trace function that takes a string as a parameter
-* path -- the context node
-
-## Classes
-
-### _class_ **Cursor**
-
-struct maapi\_cursor object
-
-Members:
-
-_None_
-
-## Predefined Values
-
-```python
-
-CMD_KEEP_PIPE = 8
-CMD_NO_AAA = 4
-CMD_NO_FULLPATH = 1
-CMD_NO_HIDDEN = 2
-COMMIT_NCS_ASYNC_COMMIT_QUEUE = 256
-COMMIT_NCS_BYPASS_COMMIT_QUEUE = 64
-COMMIT_NCS_CONFIRM_NETWORK_STATE = 268435456
-COMMIT_NCS_CONFIRM_NETWORK_STATE_RE_EVALUATE_POLICIES = 536870912
-COMMIT_NCS_NO_DEPLOY = 8
-COMMIT_NCS_NO_FASTMAP = 8
-COMMIT_NCS_NO_LSA = 1048576
-COMMIT_NCS_NO_NETWORKING = 16
-COMMIT_NCS_NO_OUT_OF_SYNC_CHECK = 32
-COMMIT_NCS_NO_OVERWRITE = 1024
-COMMIT_NCS_NO_REVISION_DROP = 4
-COMMIT_NCS_RECONCILE_ATTACH_NON_SERVICE_CONFIG = 67108864
-COMMIT_NCS_RECONCILE_DETACH_NON_SERVICE_CONFIG = 134217728
-COMMIT_NCS_RECONCILE_DISCARD_NON_SERVICE_CONFIG = 33554432
-COMMIT_NCS_RECONCILE_KEEP_NON_SERVICE_CONFIG = 16777216
-COMMIT_NCS_SYNC_COMMIT_QUEUE = 512
-COMMIT_NCS_USE_LSA = 524288
-CONFIG_AUTOCOMMIT = 8192
-CONFIG_C = 4
-CONFIG_CDB_ONLY = 4194304
-CONFIG_CONTINUE_ON_ERROR = 16384
-CONFIG_C_IOS = 32
-CONFIG_HIDE_ALL = 2048
-CONFIG_J = 2
-CONFIG_JSON = 131072
-CONFIG_MERGE = 64
-CONFIG_NO_BACKQUOTE = 2097152
-CONFIG_NO_PARENTS = 524288
-CONFIG_OPER_ONLY = 1048576
-CONFIG_READ_WRITE_ACCESS_ONLY = 33554432
-CONFIG_REPLACE = 1024
-CONFIG_SHOW_DEFAULTS = 16
-CONFIG_SUPPRESS_ERRORS = 32768
-CONFIG_TURBO_C = 8388608
-CONFIG_UNHIDE_ALL = 4096
-CONFIG_WITH_DEFAULTS = 8
-CONFIG_WITH_OPER = 128
-CONFIG_WITH_SERVICE_META = 262144
-CONFIG_XML = 1
-CONFIG_XML_LOAD_LAX = 65536
-CONFIG_XML_PRETTY = 512
-CONFIG_XPATH = 256
-DEL_ALL = 2
-DEL_EXPORTED = 3
-DEL_SAFE = 1
-ECHO = 1
-FLAG_CONFIG_CACHE_ONLY = 32
-FLAG_CONFIG_ONLY = 4
-FLAG_DELAYED_WHEN = 64
-FLAG_DELETE = 2
-FLAG_EMIT_PARENTS = 1
-FLAG_HIDE_ALL_HIDEGROUPS = 256
-FLAG_HIDE_INACTIVE = 8
-FLAG_HINT_BULK = 1
-FLAG_NON_RECURSIVE = 4
-FLAG_NO_CONFIG_CACHE = 16
-FLAG_NO_DEFAULTS = 2
-FLAG_SKIP_SUBSCRIBERS = 512
-MOVE_AFTER = 3
-MOVE_BEFORE = 2
-MOVE_FIRST = 1
-MOVE_LAST = 4
-NOECHO = 0
-PRODUCT = 'NCS'
-UPGRADE_KILL_ON_TIMEOUT = 1
-```
diff --git a/developer-reference/pyapi/_ncs.md b/developer-reference/pyapi/_ncs.md
deleted file mode 100644
index cda0def3..00000000
--- a/developer-reference/pyapi/_ncs.md
+++ /dev/null
@@ -1,2179 +0,0 @@
-# \_ncs Module
-
-NCS Python low level module.
-
-This module and its submodules provide Python bindings for the C APIs, described by the [confd\_lib(3)](../../resources/man/confd_lib.3.md) man page.
-
-The companion high level module, ncs, provides an abstraction layer on top of this module and may be easier to use.
-
-## Submodules
-
-* [\_ncs.cdb](_ncs.cdb.md): Low level module for connecting to NCS built-in XML database (CDB).
-* [\_ncs.dp](_ncs.dp.md): Low level callback module for connecting data providers to NCS.
-* [\_ncs.error](_ncs.error.md): This module defines new NCS Python API exception classes.
-* [\_ncs.events](_ncs.events.md): Low level module for subscribing to NCS event notifications.
-* [\_ncs.ha](_ncs.ha.md): Low level module for connecting to NCS HA subsystem.
-* [\_ncs.maapi](_ncs.maapi.md): Low level module for connecting to NCS with a read/write interface inside transactions.
-
-## Functions
-
-### cs\_node\_cd
-
-```python
-cs_node_cd(start, path) -> Union[CsNode, None]
-```
-
-Utility function which finds the resulting CsNode given an (optional) starting node and a (relative or absolute) string keypath.
-
-Keyword arguments:
-
-* start -- a CsNode instance or None
-* path -- the path
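-
-For example, a sketch assuming schema information has already been loaded into the library (e.g. with maapi.load\_schemas() on a connected socket); the path is hypothetical:
-
-```python
-import _ncs
-
-node = _ncs.cs_node_cd(None, '/ncs:devices/device')
-if node is not None and node.is_list():
-    print('device is a list node')
-```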
-
-### decrypt
-
-```python
-decrypt(ciphertext) -> str
-```
-
-When data is read over the CDB interface, the MAAPI interface or received in event notifications, the data for the builtin types tailf:aes-cfb-128-encrypted-string and tailf:aes-256-cfb-128-encrypted-string is encrypted. This function decrypts ciphertext and returns the clear text as a string.
-
-Keyword arguments:
-
-* ciphertext -- encrypted string
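-
-For example, a sketch assuming 'sock' is a connected maapi socket and 'ciphertext' was read from CDB or MAAPI; the crypto keys must first be installed in the library:
-
-```python
-import _ncs
-from _ncs import maapi
-
-maapi.install_crypto_keys(sock)  # fetch the AES keys into the library
-cleartext = _ncs.decrypt(ciphertext)
-```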
-
-### expr\_op2str
-
-```python
-expr_op2str(op) -> str
-```
-
-Convert confd\_expr\_op value to a string.
-
-Keyword arguments:
-
-* op -- confd\_expr\_op integer value
-
-### fatal
-
-```python
-fatal(str) -> None
-```
-
-Utility function which formats a string, prints it to stderr and exits with exit code 1. This function will never return.
-
-Keyword arguments:
-
-* str -- a message string
-
-### find\_cs\_node
-
-```python
-find_cs_node(hkeypath, len) -> Union[CsNode, None]
-```
-
-Utility function which finds the CsNode corresponding to the len first elements of the hashed keypath. To make the search consider the full keypath, leave out the len parameter.
-
-Keyword arguments:
-
-* hkeypath -- a HKeypathRef instance
-* len -- number of elements to return (optional)
-
-### find\_cs\_node\_child
-
-```python
-find_cs_node_child(parent, xmltag) -> Union[CsNode, None]
-```
-
-Utility function which finds the CsNode corresponding to the child node given as xmltag.
-
-See confd\_find\_cs\_node\_child() in [confd\_lib\_lib(3)](../../resources/man/confd_lib_lib.3.md).
-
-Keyword arguments:
-
-* parent -- the parent CsNode
-* xmltag -- the child node
-
-### find\_cs\_root
-
-```python
-find_cs_root(ns) -> Union[CsNode, None]
-```
-
-When schema information is available to the library, this function returns the root of the tree representation of the namespace given by ns for the (first) toplevel node. For namespaces that are augmented into other namespaces such that they do not have a toplevel node, this function returns None - the nodes of such a namespace are found below the augment target node(s) in other tree(s).
-
-Keyword arguments:
-
-* ns -- the namespace id
-
-### find\_ns\_type
-
-```python
-find_ns_type(nshash, name) -> Union[CsType, None]
-```
-
-Returns a CsType type definition for the type named name, which is defined in the namespace identified by nshash, or None if the type could not be found. If nshash is 0, the type name will be looked up among the built-in types (i.e. the YANG built-in types, the types defined in the YANG "tailf-common" module, and the types defined in the "confd" and "xs" namespaces).
-
-Keyword arguments:
-
-* nshash -- a namespace hash or 0 (0 searches for built-in types)
-* name -- the name of the type
-
-### get\_leaf\_list\_type
-
-```python
-get_leaf_list_type(node) -> CsType
-```
-
-For a leaf-list node, the type() method in the CsNodeInfo identifies a "list type" for the leaf-list "itself". This function returns the type of the elements in the leaf-list, i.e. corresponding to the type substatement for the leaf-list in the YANG module.
-
-Keyword arguments:
-
-* node -- The CsNode of the leaf-list
-
-### get\_nslist
-
-```python
-get_nslist() -> list
-```
-
-Provides a list of the namespaces known to the library as a list of five-tuples. Each tuple contains the namespace hash (int), the prefix (string), the namespace uri (string), the revision (string), and the module name (string).
-
-If schemas are not loaded an empty list will be returned.
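-
-For example, a sketch that prints the prefix and URI of every known namespace, assuming schemas are loaded:
-
-```python
-import _ncs
-
-for nshash, prefix, uri, revision, module in _ncs.get_nslist():
-    print(prefix, uri)
-```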
-
-### hash2str
-
-```python
-hash2str(hash) -> Union[str, None]
-```
-
-Returns a string representing the node name given by hash, or None if the hash value is not found. Requires that schema information has been loaded from the NCS daemon into the library - otherwise it always returns None.
-
-Keyword arguments:
-
-* hash -- a hash
-
-### hkeypath\_dup
-
-```python
-hkeypath_dup(hkeypath) -> HKeypathRef
-```
-
-Duplicates a HKeypathRef object.
-
-Keyword arguments:
-
-* hkeypath -- a HKeypathRef instance
-
-### hkeypath\_dup\_len
-
-```python
-hkeypath_dup_len(hkeypath, len) -> HKeypathRef
-```
-
-Duplicates the first len elements of hkeypath.
-
-Keyword arguments:
-
-* hkeypath -- a HKeypathRef instance
-* len -- number of elements to include in the copy
-
-### hkp\_prefix\_tagmatch
-
-```python
-hkp_prefix_tagmatch(hkeypath, tags) -> bool
-```
-
-A simplified version of hkp\_tagmatch() - it returns True if the tagpath matches a prefix of the hkeypath, i.e. it is equivalent to calling hkp\_tagmatch() and checking if the return value includes CONFD\_HKP\_MATCH\_TAGS.
-
-Keyword arguments:
-
-* hkeypath -- a HKeypathRef instance
-* tags -- a list of XmlTag instances
-
-### hkp\_tagmatch
-
-```python
-hkp_tagmatch(hkeypath, tags) -> int
-```
-
-When checking the hkeypaths that get passed into each iteration in e.g. cdb\_diff\_iterate() we can either explicitly check the paths, or use this function to do the job. The tags list (typically statically initialized) specifies a tagpath to match against the hkeypath. See cdb\_diff\_match().
-
-Keyword arguments:
-
-* hkeypath -- a HKeypathRef instance
-* tags -- a list of XmlTag instances
-
-### init
-
-```python
-init(name, file, level) -> None
-```
-
-Initializes the ConfD library. Must be called before any other NCS API functions are called. There should be no need to call this function directly. It is called internally when the Python module is loaded.
-
-Keyword arguments:
-
-* name -- a name string used in debug printouts
-* file -- (optional)
-* level -- (optional)
-
-### internal\_connect
-
-```python
-internal_connect(id, sock, ip, port, path) -> None
-```
-
-Internal function used by NCS Python VM.
-
-### list\_filter\_type2str
-
-```python
-list_filter_type2str(op) -> str
-```
-
-Convert confd\_list\_filter\_type value to a string.
-
-Keyword arguments:
-
-* op -- confd\_list\_filter\_type integer value
-
-### max\_object\_size
-
-```python
-max_object_size(object) -> int
-```
-
-Utility function which returns the maximum size (i.e. the needed length of the confd\_value\_t array) for an "object" retrieved by cdb\_get\_object(), maapi\_get\_object(), and corresponding multi-object functions.
-
-Keyword arguments:
-
-* object -- the CsNode
-
-### mmap\_schemas
-
-```python
-mmap_schemas(filename) -> None
-```
-
-If shared memory schema support has been enabled, this function will map a shared memory segment into the current process address space and make it ready for use.
-
-The filename can be obtained by using the get\_schema\_file\_path() function.
-
-The filename argument specifies the pathname of the file that is used as backing store.
-
-Keyword arguments:
-
-* filename -- a filename string
-
-### next\_object\_node
-
-```python
-next_object_node(object, cur, value) -> Union[CsNode, None]
-```
-
-Utility function to allow navigation of the confd\_cs\_node schema tree in parallel with the confd\_value\_t array populated by cdb\_get\_object(), maapi\_get\_object(), and corresponding multi-object functions.
-
-The cur parameter is the CsNode for the current value, and the value parameter is the current value in the array. The function returns a CsNode for the next value in the array, or None when the complete object has been traversed. In the initial call for a given traversal, we must pass object.children() for the cur parameter - this always points to the CsNode for the first value in the array.
-
-Keyword arguments:
-
-* object -- CsNode of the list container node
-* cur -- The CsNode of the current value
-* value -- The current value
-
-### ns2prefix
-
-```python
-ns2prefix(ns) -> Union[str, None]
-```
-
-Returns a string giving the namespace prefix for the namespace ns, if the namespace is known to the library - otherwise it returns None.
-
-Keyword arguments:
-
-* ns -- a namespace hash
-
-### pp\_kpath
-
-```python
-pp_kpath(hkeypath) -> str
-```
-
-Utility function which pretty prints a string representation of the path hkeypath. This will use the NCS curly brace notation, i.e. "/servers/server{www}/ip". Requires that schema information is available to the library.
-
-Keyword arguments:
-
-* hkeypath -- a HKeypathRef instance
-
-### pp\_kpath\_len
-
-```python
-pp_kpath_len(hkeypath, len) -> str
-```
-
-A variant of pp\_kpath() that prints only the first len elements of hkeypath.
-
-Keyword arguments:
-
-* hkeypath -- a HKeypathRef instance
-* len -- number of elements to print
-
-### set\_debug
-
-```python
-set_debug(level, file) -> None
-```
-
-Sets the debug level.
-
-Keyword arguments:
-
-* level -- (optional)
-* file -- (optional)
-
-### set\_kill\_child\_on\_parent\_exit
-
-```python
-set_kill_child_on_parent_exit() -> bool
-```
-
-Instruct the operating system to kill this process if the parent process exits.
-
-### str2hash
-
-```python
-str2hash(str) -> int
-```
-
-Returns the hash value representing the node name given by str, or 0 if the string is not found. Requires that schema information has been loaded from the NCS daemon into the library - otherwise it always returns 0.
-
-Keyword arguments:
-
-* str -- a name string
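-
-For example, a round trip between a node name and its hash, assuming schemas are loaded and 'description' exists in the schema:
-
-```python
-import _ncs
-
-h = _ncs.str2hash('description')  # 0 if the name is unknown
-print(_ncs.hash2str(h))           # 'description' (or None)
-```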
-
-### stream\_connect
-
-```python
-stream_connect(sock, id, flags, ip, port, path) -> None
-```
-
-Connects a stream socket to NCS.
-
-Keyword arguments:
-
-* sock -- a Python socket instance
-* id -- id
-* flags -- flags
-* ip -- ip address - if sock family is AF\_INET or AF\_INET6 (optional)
-* port -- port - if sock family is AF\_INET or AF\_INET6 (optional)
-* path -- a filename - if sock family is AF\_UNIX (optional)
-
-### xpath\_pp\_kpath
-
-```python
-xpath_pp_kpath(hkeypath) -> str
-```
-
-Utility function which pretty prints a string representation of the path hkeypath. This will format the path as an XPath, i.e. '/servers/server\[name="www"]/ip'. Requires that schema information is available to the library.
-
-Keyword arguments:
-
-* hkeypath -- a HKeypathRef instance
-
-## Classes
-
-### _class_ **AttrValue**
-
-This type represents the c-type confd\_attr\_value\_t.
-
-The constructor for this type has the following signature:
-
-AttrValue(attr, v) -> object
-
-Keyword arguments:
-
-* attr -- attribute type
-* v -- value
-
-Members:
-
-
-
-attr
-
-attribute type (int)
-
-
-
-
-
-v
-
-attribute value (Value)
-
-
-
-### _class_ **AuthorizationInfo**
-
-This type represents the c-type struct confd\_authorization\_info.
-
-AuthorizationInfo cannot be directly instantiated from Python.
-
-Members:
-
-
-
-groups
-
-authorization groups (list of strings)
-
-
-
-### _class_ **CsCase**
-
-This type represents the c-type struct confd\_cs\_case.
-
-CsCase cannot be directly instantiated from Python.
-
-Members:
-
-
-
-choices(...)
-
-Method:
-
-```python
-choices() -> Union[CsChoice, None]
-```
-
-Returns the CsCase choices.
-
-
-
-
-
-first(...)
-
-Method:
-
-```python
-first() -> Union[CsNode, None]
-```
-
-Returns the CsCase first.
-
-
-
-
-
-last(...)
-
-Method:
-
-```python
-last() -> Union[CsNode, None]
-```
-
-Returns the CsCase last.
-
-
-
-
-
-next(...)
-
-Method:
-
-```python
-next() -> Union[CsCase, None]
-```
-
-Returns the CsCase next.
-
-
-
-
-
-ns(...)
-
-Method:
-
-```python
-ns() -> int
-```
-
-Returns the CsCase ns hash.
-
-
-
-
-
-parent(...)
-
-Method:
-
-```python
-parent() -> Union[CsChoice, None]
-```
-
-Returns the CsCase parent.
-
-
-
-
-
-tag(...)
-
-Method:
-
-```python
-tag() -> int
-```
-
-Returns the CsCase tag hash.
-
-
-
-### _class_ **CsChoice**
-
-This type represents the c-type struct confd\_cs\_choice.
-
-CsChoice cannot be directly instantiated from Python.
-
-Members:
-
-
-
-case_parent(...)
-
-Method:
-
-```python
-case_parent() -> Union[CsCase, None]
-```
-
-Returns the CsChoice case parent.
-
-
-
-
-
-cases(...)
-
-Method:
-
-```python
-cases() -> Union[CsCase, None]
-```
-
-Returns the CsChoice cases.
-
-
-
-
-
-default_case(...)
-
-Method:
-
-```python
-default_case() -> Union[CsCase, None]
-```
-
-Returns the CsChoice default case.
-
-
-
-
-
-min_occurs(...)
-
-Method:
-
-```python
-min_occurs() -> int
-```
-
-Returns the CsChoice minOccurs.
-
-
-
-
-
-next(...)
-
-Method:
-
-```python
-next() -> Union[CsChoice, None]
-```
-
-Returns the CsChoice next.
-
-
-
-
-
-ns(...)
-
-Method:
-
-```python
-ns() -> int
-```
-
-Returns the CsChoice ns hash.
-
-
-
-
-
-parent(...)
-
-Method:
-
-```python
-parent() -> Union[CsNode, None]
-```
-
-Returns the CsChoice parent CsNode.
-
-
-
-
-
-tag(...)
-
-Method:
-
-```python
-tag() -> int
-```
-
-Returns the CsChoice tag hash.
-
-
-
-### _class_ **CsNode**
-
-This type represents the c-type struct confd\_cs\_node.
-
-CsNode cannot be directly instantiated from Python.
-
-Members:
-
-
-
-children(...)
-
-Method:
-
-```python
-children() -> Union[CsNode, None]
-```
-
-Returns the children CsNode or None.
-
-
-
-
-
-has_display_when(...)
-
-Method:
-
-```python
-has_display_when() -> bool
-```
-
-Returns True if CsNode has YANG 'tailf:display-when' statement(s).
-
-
-
-
-
-has_when(...)
-
-Method:
-
-```python
-has_when() -> bool
-```
-
-Returns True if CsNode has YANG 'when' statement(s).
-
-
-
-
-
-info(...)
-
-Method:
-
-```python
-info() -> CsNodeInfo
-```
-
-Returns a CsNodeInfo.
-
-
-
-
-
-is_action(...)
-
-Method:
-
-```python
-is_action() -> bool
-```
-
-Returns True if CsNode is an action.
-
-
-
-
-
-is_action_param(...)
-
-Method:
-
-```python
-is_action_param() -> bool
-```
-
-Returns True if CsNode is an action parameter.
-
-
-
-
-
-is_action_result(...)
-
-Method:
-
-```python
-is_action_result() -> bool
-```
-
-Returns True if CsNode is an action result.
-
-
-
-
-
-is_case(...)
-
-Method:
-
-```python
-is_case() -> bool
-```
-
-Returns True if CsNode is a case.
-
-
-
-
-
-is_container(...)
-
-Method:
-
-```python
-is_container() -> bool
-```
-
-Returns True if CsNode is a container.
-
-
-
-
-
-is_empty_leaf(...)
-
-Method:
-
-```python
-is_empty_leaf() -> bool
-```
-
-Returns True if CsNode is a leaf which is empty.
-
-
-
-
-
-is_key(...)
-
-Method:
-
-```python
-is_key() -> bool
-```
-
-Returns True if CsNode is a key.
-
-
-
-
-
-is_leaf(...)
-
-Method:
-
-```python
-is_leaf() -> bool
-```
-
-Returns True if CsNode is a leaf.
-
-
-
-
-
-is_leaf_list(...)
-
-Method:
-
-```python
-is_leaf_list() -> bool
-```
-
-Returns True if CsNode is a leaf-list.
-
-
-
-
-
-is_leafref(...)
-
-Method:
-
-```python
-is_leafref() -> bool
-```
-
-Returns True if CsNode is a YANG 'leafref'.
-
-
-
-
-
-is_list(...)
-
-Method:
-
-```python
-is_list() -> bool
-```
-
-Returns True if CsNode is a list.
-
-
-
-
-
-is_mount_point(...)
-
-Method:
-
-```python
-is_mount_point() -> bool
-```
-
-Returns True if CsNode is a mount point.
-
-
-
-
-
-is_non_empty_leaf(...)
-
-Method:
-
-```python
-is_non_empty_leaf() -> bool
-```
-
-Returns True if CsNode is a leaf which is not of type empty.
-
-
-
-
-
-is_notif(...)
-
-Method:
-
-```python
-is_notif() -> bool
-```
-
-Returns True if CsNode is a notification.
-
-
-
-
-
-is_np_container(...)
-
-Method:
-
-```python
-is_np_container() -> bool
-```
-
-Returns True if CsNode is a non presence container.
-
-
-
-
-
-is_oper(...)
-
-Method:
-
-```python
-is_oper() -> bool
-```
-
-Returns True if CsNode is OPER data.
-
-
-
-
-
-is_p_container(...)
-
-Method:
-
-```python
-is_p_container() -> bool
-```
-
-Returns True if CsNode is a presence container.
-
-
-
-
-
-is_union(...)
-
-Method:
-
-```python
-is_union() -> bool
-```
-
-Returns True if CsNode is a union.
-
-
-
-
-
-is_writable(...)
-
-Method:
-
-```python
-is_writable() -> bool
-```
-
-Returns True if CsNode is writable.
-
-
-
-
-
-next(...)
-
-Method:
-
-```python
-next() -> Union[CsNode, None]
-```
-
-Returns the next CsNode or None.
-
-
-
-
-
-ns(...)
-
-Method:
-
-```python
-ns() -> int
-```
-
-Returns the namespace value.
-
-
-
-
-
-parent(...)
-
-Method:
-
-```python
-parent() -> Union[CsNode, None]
-```
-
-Returns the parent CsNode or None.
-
-
-
-
-
-tag(...)
-
-Method:
-
-```python
-tag() -> int
-```
-
-Returns the tag value.
-
-
-
-### _class_ **CsNodeInfo**
-
-This type represents the c-type struct confd\_cs\_node\_info.
-
-CsNodeInfo cannot be directly instantiated from Python.
-
-Members:
-
-
-
-choices(...)
-
-Method:
-
-```python
-choices() -> Union[CsChoice, None]
-```
-
-Returns CsNodeInfo choices.
-
-
-
-
-
-cmp(...)
-
-Method:
-
-```python
-cmp() -> int
-```
-
-Returns CsNodeInfo cmp.
-
-
-
-
-
-defval(...)
-
-Method:
-
-```python
-defval() -> Value
-```
-
-Returns CsNodeInfo value.
-
-
-
-
-
-flags(...)
-
-Method:
-
-```python
-flags() -> int
-```
-
-Returns CsNodeInfo flags.
-
-
-
-
-
-keys(...)
-
-Method:
-
-```python
-keys() -> List[int]
-```
-
-Returns a list of hashed key values.
-
-
-
-
-
-max_occurs(...)
-
-Method:
-
-```python
-max_occurs() -> int
-```
-
-Returns CsNodeInfo max\_occurs.
-
-
-
-
-
-meta_data(...)
-
-Method:
-
-```python
-meta_data() -> Union[Dict, None]
-```
-
-Returns CsNodeInfo meta\_data.
-
-
-
-
-
-min_occurs(...)
-
-Method:
-
-```python
-min_occurs() -> int
-```
-
-Returns CsNodeInfo min\_occurs.
-
-
-
-
-
-shallow_type(...)
-
-Method:
-
-```python
-shallow_type() -> int
-```
-
-Returns CsNodeInfo shallow\_type.
-
-
-
-
-
-type(...)
-
-Method:
-
-```python
-type() -> int
-```
-
-Returns CsNodeInfo type.
-
-
-
-### _class_ **CsType**
-
-This type represents the c-type struct confd\_type.
-
-CsType cannot be directly instantiated from Python.
-
-Members:
-
-
-
-bitbig_size(...)
-
-Method:
-
-```python
-bitbig_size() -> int
-```
-
-Returns the maximum size needed for the byte array for the BITBIG value when a YANG bits type has a highest position above 63. If this is not a BITBIG value or if the highest position is 63 or less, this function will return 0.
-
-
-
-
-
-defval(...)
-
-Method:
-
-```python
-defval() -> Union[CsType, None]
-```
-
-Returns the CsType defval.
-
-
-
-
-
-parent(...)
-
-Method:
-
-```python
-parent() -> Union[CsType, None]
-```
-
-Returns the CsType parent.
-
-
-
-### _class_ **DateTime**
-
-This type represents the c-type struct confd\_datetime.
-
-The constructor for this type has the following signature:
-
-DateTime(year, month, day, hour, min, sec, micro, timezone, timezone\_minutes) -> object
-
-Keyword arguments:
-
-* year -- the year (int)
-* month -- the month (int)
-* day -- the day (int)
-* hour -- the hour (int)
-* min -- minutes (int)
-* sec -- seconds (int)
-* micro -- micro seconds (int)
-* timezone -- the timezone (int)
-* timezone\_minutes -- timezone minutes (int)
-
-Members:
-
-
-
-day
-
-the day
-
-
-
-
-
-hour
-
-the hour
-
-
-
-
-
-micro
-
-micro seconds
-
-
-
-
-
-min
-
-minutes
-
-
-
-
-
-month
-
-the month
-
-
-
-
-
-sec
-
-seconds
-
-
-
-
-
-timezone
-
-timezone
-
-
-
-
-
-timezone_minutes
-
-timezone minutes
-
-
-
-
-
-year
-
-the year
-
-
-
-### _class_ **HKeypathRef**
-
-This type represents the c-type confd\_hkeypath\_t.
-
-HKeypathRef implements some sequence methods that enable indexing, iteration and length checking. There is also support for slicing, e.g.:
-
-Let's say the variable hkp is a valid hkeypath pointing to '/foo/bar{a}/baz' and we slice that object like this:
-
-```
-newhkp = hkp[1:]
-```
-
-In this case newhkp will be a new hkeypath pointing to '/foo/bar{a}'. Note that the last element must always be included, so trying to create a slice with hkp\[1:2] will fail.
-
-The example above could also be written using the dup\_len() method:
-
-```
-newhkp = hkp.dup_len(3)
-```
-
-Retrieving an element of the HKeypathRef when the underlying Value is of type C\_XMLTAG returns a XmlTag instance. In all other cases a tuple of Values is returned.
-
-When receiving an HKeypathRef object as an argument in a callback method, the underlying object is only borrowed, so this particular instance is only valid inside that callback method. If one, for some reason, would like to keep the HKeypathRef object 'alive' for any longer than that, use dup() or dup\_len() to get a copy of it. Slicing also creates a copy.
-
-HKeypathRef cannot be directly instantiated from Python.
-
-Members:
-
-
-
-dup(...)
-
-Method:
-
-```python
-dup() -> HKeypathRef
-```
-
-Duplicates this hkeypath.
-
-
-
-
-
-dup_len(...)
-
-Method:
-
-```python
-dup_len(len) -> HKeypathRef
-```
-
-Duplicates the first len elements of this hkeypath.
-
-Keyword arguments:
-
-* len -- number of elements to include in the copy
-
-
-
-### _class_ **ProgressLink**
-
-This type represents the c-type struct confd\_progress\_link.
-
-ProgressLink cannot be directly instantiated from Python.
-
-Members:
-
-
-
-span_id
-
-span id (string)
-
-
-
-
-
-trace_id
-
-trace id (string)
-
-
-
-### _class_ **QueryResult**
-
-This type represents the c-type struct confd\_query\_result.
-
-QueryResult implements some sequence methods that enable indexing, iteration and length checking.
-
-QueryResult cannot be directly instantiated from Python.
-
-Members:
-
-
-
-nelements
-
-number of elements (int)
-
-
-
-
-
-nresults
-
-number of results (int)
-
-
-
-
-
-offset
-
-the offset (int)
-
-
-
-
-
-type
-
-the query result type (int)
-
-
-
-### _class_ **SnmpVarbind**
-
-This type represents the c-type struct confd\_snmp\_varbind.
-
-The constructor for this type has the following signature:
-
-SnmpVarbind(type, val, vartype, name, oid, cr) -> object
-
-Keyword arguments:
-
-* type -- SNMP\_VARIABLE, SNMP\_OID or SNMP\_COL\_ROW (int)
-* val -- value (Value)
-* vartype -- snmp type (optional)
-* name -- mandatory if type is SNMP\_VARIABLE (string)
-* oid -- mandatory if type is SNMP\_OID (list of integers)
-* cr -- mandatory if type is SNMP\_COL\_ROW (described below)
-
-When type is SNMP\_COL\_ROW the cr argument must be provided. It is built up as a 2-tuple like this: tuple(string, list(int)).
-
-The first element of the 2-tuple is the column name.
-
-The second element (the row index) is a list of up to 128 integers.
-
-Members:
-
-
-
-type
-
-the SnmpVarbind type
-
-
-
-### _class_ **TagValue**
-
-This type represents the c-type confd\_tag\_value\_t.
-
-In addition to the 'ns' and 'tag' attributes there is an additional attribute 'v' which contains the Value object.
-
-The constructor for this type has the following signature:
-
-TagValue(xmltag, v, tag, ns) -> object
-
-There are two ways to construct this object. The first one requires that both xmltag and v are specified. The second one requires that both tag and ns are specified.
-
-Keyword arguments:
-
-* xmltag -- a XmlTag instance (optional)
-* v -- a Value instance (optional)
-* tag -- tag hash (optional)
-* ns -- namespace hash (optional)
-
-Members:
-
-
-
-ns
-
-namespace hash
-
-
-
-
-
-tag
-
-tag hash
-
-
-
-### _class_ **TransCtxRef**
-
-This type represents the c-type struct confd\_trans\_ctx.
-
-Available attributes:
-
-* fd -- worker socket (int)
-* th -- transaction handle (int)
-* secondary\_index -- secondary index number for list traversal (int)
-* username -- from user session (string) DEPRECATED, see uinfo
-* context -- from user session (string) DEPRECATED, see uinfo
-* uinfo -- user session (UserInfo)
-* accumulated -- if the data provider is using the accumulate functionality this attribute will contain the first dp.TrItemRef object in the linked list, otherwise it will be None
-* traversal\_id -- unique id for the get\_next\* invocation
-
-TransCtxRef cannot be directly instantiated from Python.
-
-Members:
-
-_None_
-
-### _class_ **UserInfo**
-
-This type represents the c-type struct confd\_user\_info.
-
-UserInfo cannot be directly instantiated from Python.
-
-Members:
-
-
-
-actx_thandle
-
-actx\_thandle -- action context transaction handle
-
-
-
-
-
-addr
-
-addr -- ip address (string)
-
-
-
-
-
-af
-
-af -- address family AF\_INET or AF\_INET6 (int)
-
-
-
-
-
-clearpass
-
-clearpass -- password if available (string)
-
-
-
-
-
-context
-
-context -- the context (string)
-
-
-
-
-
-flags
-
-flags -- CONFD\_USESS\_FLAG\_... (int)
-
-
-
-
-
-lmode
-
-lmode -- the lock we have (int)
-
-
-
-
-
-logintime
-
-logintime -- time for login (long)
-
-
-
-
-
-port
-
-port -- source port (int)
-
-
-
-
-
-proto
-
-proto -- protocol (int)
-
-
-
-
-
-snmp_v3_ctx
-
-snmp\_v3\_ctx -- SNMP context (string)
-
-
-
-
-
-username
-
-username -- the username (string)
-
-
-
-
-
-usid
-
-usid -- user session id (int)
-
-
-
-### _class_ **Value**
-
-This type represents the c-type confd\_value\_t.
-
-The constructor for this type has the following signature:
-
-Value(init, type) -> object
-
-If type is not provided it will be automatically set by inspecting the type of argument init according to this table:
-
-| Python type | Value type |
-| ----------- | ---------- |
-| bool | C\_BOOL |
-| int | C\_INT32 |
-| long | C\_INT64 |
-| float | C\_DOUBLE |
-| string | C\_BUF |
-
-If any other type is provided for the init argument, the type will be set to C\_BUF and the value will be the string representation of init.
-
-For types C\_XMLTAG, C\_XMLBEGIN and C\_XMLEND the init argument must be a 2-tuple which specifies the ns and tag values like this: (ns, tag).
-
-For type C\_IDENTITYREF the init argument must be a 2-tuple which specifies the ns and id values like this: (ns, id).
-
-For types C\_IPV4, C\_IPV6, C\_DATETIME, C\_DATE, C\_TIME, C\_DURATION, C\_OID, C\_IPV4PREFIX and C\_IPV6PREFIX, the init argument must be a string.
-
-For type C\_DECIMAL64 the init argument must be a string, or a 2-tuple which specifies value and fraction digits like this: (value, fraction\_digits).
-
-For type C\_BINARY the init argument must be a bytes instance.
-
-Keyword arguments:
-
-* init -- the initial value
-* type -- type (optional, see confd\_types(3))
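-
-For example, a few constructions following the table above; the explicit types come from the Predefined Values section below:
-
-```python
-import _ncs
-
-v_int = _ncs.Value(42)                           # inferred C_INT32
-v_bool = _ncs.Value(True)                        # inferred C_BOOL
-v_ip = _ncs.Value('10.0.0.1', _ncs.C_IPV4)       # explicit type
-v_dec = _ncs.Value((1234, 2), _ncs.C_DECIMAL64)  # 12.34
-```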
-
-Members:
-
-
-
-as_decimal64(...)
-
-Method:
-
-```python
-as_decimal64() -> Tuple[int, int]
-```
-
-Returns a tuple containing (value, fraction\_digits) if this value is of type C\_DECIMAL64.
-
-
-
-
-
-as_list(...)
-
-Method:
-
-```python
-as_list() -> list
-```
-
-Returns a list of Value's if this value is of type C\_LIST.
-
-
-
-
-
-as_pyval(...)
-
-Method:
-
-```python
-as_pyval() -> Any
-```
-
-Tries to convert a Value to a native Python type. If possible the object returned will be of the same type as used when initializing a Value object. If the type cannot be represented as something useful in Python a string will be returned. Note that not all Value types are supported.
-
-E.g. assuming you already have a value object, this should be possible in most cases:
-
-newvalue = Value(value.as\_pyval(), value.confd\_type())
-
-
-
-
-
-as_xmltag(...)
-
-Method:
-
-```python
-as_xmltag() -> XmlTag
-```
-
-Returns a XmlTag instance if this value is of type C\_XMLTAG.
-
-
-
-
-
-confd_type(...)
-
-Method:
-
-```python
-confd_type() -> int
-```
-
-Returns the confd type.
-
-
-
-
-
-confd_type_str(...)
-
-Method:
-
-```python
-confd_type_str() -> str
-```
-
-Returns a string representation for the Value type.
-
-
-
-
-
-str2val(...)
-
-Class method:
-
-```python
-str2val(value, schema_type) -> Value
-(class method)
-```
-
-Create and return a Value from a string. The schema\_type argument must be either a 2-tuple with namespace and keypath, a CsNode instance or a CsType instance.
-
-Keyword arguments:
-
-* value -- string value
-* schema\_type -- either (ns, keypath), a CsNode or a CsType
-
-
-
-
-
-val2str(...)
-
-Method:
-
-```python
-val2str(schema_type) -> str
-```
-
-Return a string representation of Value. The schema\_type argument must be either a 2-tuple with namespace and keypath, a CsNode instance or a CsType instance.
-
-Keyword arguments:
-
-* schema\_type -- either (ns, keypath), a CsNode or a CsType
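-
-For example, a sketch assuming schemas are loaded and 'node' is the CsNode of a string-compatible leaf (e.g. obtained with cs\_node\_cd()):
-
-```python
-import _ncs
-
-v = _ncs.Value.str2val('10.0.0.1', node)
-s = v.val2str(node)  # back to '10.0.0.1'
-```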
-
-
-
-### _class_ **XmlTag**
-
-This type represent the c-type struct xml\_tag.
-
-The constructor for this type has the following signature:
-
-XmlTag(ns, tag) -> object
-
-Keyword arguments:
-
-* ns -- namespace hash
-* tag -- tag hash
-
-Members:
-
-
-
-ns
-
-namespace hash value (unsigned int)
-
-
-
-
-
-tag
-
-tag hash value (unsigned int)
-
-
-
-## Predefined Values
-
-```python
-
-ACCUMULATE = 1
-ADDR = '127.0.0.1'
-ALREADY_LOCKED = -4
-ATTR_ANNOTATION = 2147483649
-ATTR_BACKPOINTER = 2147483651
-ATTR_INACTIVE = 0
-ATTR_ORIGIN = 2147483655
-ATTR_ORIGINAL_VALUE = 2147483653
-ATTR_OUT_OF_BAND = 2147483664
-ATTR_REFCOUNT = 2147483650
-ATTR_TAGS = 2147483648
-ATTR_WHEN = 2147483652
-CANDIDATE = 1
-CMP_EQ = 1
-CMP_GT = 3
-CMP_GTE = 4
-CMP_LT = 5
-CMP_LTE = 6
-CMP_NEQ = 2
-CMP_NOP = 0
-CONFD_EOF = -2
-CONFD_ERR = -1
-CONFD_OK = 0
-CONFD_PORT = 4565
-CS_NODE_CMP_NORMAL = 0
-CS_NODE_CMP_SNMP = 1
-CS_NODE_CMP_SNMP_IMPLIED = 2
-CS_NODE_CMP_UNSORTED = 4
-CS_NODE_CMP_USER = 3
-CS_NODE_HAS_DISPLAY_WHEN = 1024
-CS_NODE_HAS_META_DATA = 2048
-CS_NODE_HAS_MOUNT_POINT = 32768
-CS_NODE_HAS_WHEN = 512
-CS_NODE_IS_ACTION = 8
-CS_NODE_IS_CASE = 128
-CS_NODE_IS_CDB = 4
-CS_NODE_IS_CONTAINER = 256
-CS_NODE_IS_DYN = 1
-CS_NODE_IS_LEAFREF = 16384
-CS_NODE_IS_LEAF_LIST = 8192
-CS_NODE_IS_LIST = 1
-CS_NODE_IS_NOTIF = 64
-CS_NODE_IS_PARAM = 16
-CS_NODE_IS_RESULT = 32
-CS_NODE_IS_STRING_AS_BINARY = 65536
-CS_NODE_IS_WRITE = 2
-CS_NODE_IS_WRITE_ALL = 4096
-C_BINARY = 39
-C_BIT32 = 29
-C_BIT64 = 30
-C_BITBIG = 50
-C_BOOL = 17
-C_BUF = 5
-C_CDBBEGIN = 37
-C_DATE = 20
-C_DATETIME = 19
-C_DECIMAL64 = 43
-C_DEFAULT = 42
-C_DOUBLE = 14
-C_DQUAD = 46
-C_DURATION = 27
-C_EMPTY = 53
-C_ENUM_HASH = 28
-C_ENUM_VALUE = 28
-C_HEXSTR = 47
-C_IDENTITYREF = 44
-C_INT16 = 7
-C_INT32 = 8
-C_INT64 = 9
-C_INT8 = 6
-C_IPV4 = 15
-C_IPV4PREFIX = 40
-C_IPV4_AND_PLEN = 48
-C_IPV6 = 16
-C_IPV6PREFIX = 41
-C_IPV6_AND_PLEN = 49
-C_LIST = 31
-C_NOEXISTS = 1
-C_OBJECTREF = 34
-C_OID = 38
-C_PTR = 36
-C_QNAME = 18
-C_STR = 4
-C_SYMBOL = 3
-C_TIME = 23
-C_UINT16 = 11
-C_UINT32 = 12
-C_UINT64 = 13
-C_UINT8 = 10
-C_UNION = 35
-C_XMLBEGIN = 32
-C_XMLBEGINDEL = 45
-C_XMLEND = 33
-C_XMLMOVEAFTER = 52
-C_XMLMOVEFIRST = 51
-C_XMLTAG = 2
-DB_INVALID = 0
-DB_VALID = 1
-DEBUG = 1
-DELAYED_RESPONSE = 2
-EOF = -2
-ERR = -1
-ERRCODE_ACCESS_DENIED = 3
-ERRCODE_APPLICATION = 4
-ERRCODE_APPLICATION_INTERNAL = 5
-ERRCODE_DATA_MISSING = 8
-ERRCODE_INCONSISTENT_VALUE = 2
-ERRCODE_INTERNAL = 7
-ERRCODE_INTERRUPT = 9
-ERRCODE_IN_USE = 0
-ERRCODE_PROTO_USAGE = 6
-ERRCODE_RESOURCE_DENIED = 1
-ERRINFO_KEYPATH = 0
-ERRINFO_STRING = 1
-ERR_ABORTED = 49
-ERR_ACCESS_DENIED = 3
-ERR_ALREADY_EXISTS = 2
-ERR_APPLICATION_INTERNAL = 39
-ERR_BADPATH = 8
-ERR_BADSTATE = 17
-ERR_BADTYPE = 5
-ERR_BAD_CONFIG = 36
-ERR_BAD_KEYREF = 14
-ERR_CLI_CMD = 59
-ERR_DATA_MISSING = 58
-ERR_EOF = 45
-ERR_EXTERNAL = 19
-ERR_HA_ABORT = 71
-ERR_HA_BADCONFIG = 69
-ERR_HA_BADFXS = 27
-ERR_HA_BADNAME = 29
-ERR_HA_BADTOKEN = 28
-ERR_HA_BADVSN = 52
-ERR_HA_BIND = 30
-ERR_HA_CLOSED = 26
-ERR_HA_CONNECT = 25
-ERR_HA_NOTICK = 31
-ERR_HA_WITH_UPGRADE = 47
-ERR_INCONSISTENT_VALUE = 38
-ERR_INTERNAL = 18
-ERR_INUSE = 11
-ERR_INVALID_INSTANCE = 43
-ERR_LIB_NOT_INITIALIZED = 34
-ERR_LOCKED = 10
-ERR_MALLOC = 20
-ERR_MISSING_INSTANCE = 42
-ERR_MUST_FAILED = 41
-ERR_NOEXISTS = 1
-ERR_NON_UNIQUE = 13
-ERR_NOSESSION = 22
-ERR_NOSTACK = 9
-ERR_NOTCREATABLE = 6
-ERR_NOTDELETABLE = 7
-ERR_NOTMOVABLE = 46
-ERR_NOTRANS = 61
-ERR_NOTSET = 12
-ERR_NOT_IMPLEMENTED = 51
-ERR_NOT_WRITABLE = 4
-ERR_NO_MOUNT_ID = 67
-ERR_OS = 24
-ERR_POLICY_COMPILATION_FAILED = 54
-ERR_POLICY_EVALUATION_FAILED = 55
-ERR_POLICY_FAILED = 53
-ERR_PROTOUSAGE = 21
-ERR_RESOURCE_DENIED = 37
-ERR_STALE_INSTANCE = 68
-ERR_START_FAILED = 57
-ERR_SUBAGENT_DOWN = 33
-ERR_TIMEOUT = 48
-ERR_TOOMANYTRANS = 23
-ERR_TOO_FEW_ELEMS = 15
-ERR_TOO_MANY_ELEMS = 16
-ERR_TOO_MANY_SESSIONS = 35
-ERR_TRANSACTION_CONFLICT = 70
-ERR_UNAVAILABLE = 44
-ERR_UNSET_CHOICE = 40
-ERR_UPGRADE_IN_PROGRESS = 60
-ERR_VALIDATION_WARNING = 32
-ERR_XPATH = 50
-EXEC_COMPARE = 13
-EXEC_CONTAINS = 11
-EXEC_DERIVED_FROM = 9
-EXEC_DERIVED_FROM_OR_SELF = 10
-EXEC_RE_MATCH = 8
-EXEC_STARTS_WITH = 7
-EXEC_STRING_COMPARE = 12
-FALSE = 0
-FIND_NEXT = 0
-FIND_SAME_OR_NEXT = 1
-HKP_MATCH_FULL = 3
-HKP_MATCH_HKP = 2
-HKP_MATCH_NONE = 0
-HKP_MATCH_TAGS = 1
-INTENDED = 7
-IN_USE = -5
-ITER_CONTINUE = 3
-ITER_RECURSE = 2
-ITER_STOP = 1
-ITER_SUSPEND = 4
-ITER_UP = 5
-ITER_WANT_ANCESTOR_DELETE = 2
-ITER_WANT_ATTR = 4
-ITER_WANT_CLI_ORDER = 1024
-ITER_WANT_CLI_STR = 8
-ITER_WANT_LEAF_FIRST_ORDER = 32
-ITER_WANT_LEAF_LAST_ORDER = 64
-ITER_WANT_PREV = 1
-ITER_WANT_P_CONTAINER = 256
-ITER_WANT_REVERSE = 128
-ITER_WANT_SCHEMA_ORDER = 16
-ITER_WANT_SUPPRESS_OPER_DEFAULTS = 2048
-LF_AND = 1
-LF_CMP = 3
-LF_CMP_LL = 7
-LF_EXEC = 5
-LF_EXISTS = 4
-LF_NOT = 2
-LF_OR = 0
-LF_ORIGIN = 6
-LIB_API_VSN = 134610944
-LIB_API_VSN_STR = '08060000'
-LIB_PROTO_VSN = 86
-LIB_PROTO_VSN_STR = '86'
-LIB_VSN = 134610944
-LIB_VSN_STR = '08060000'
-LISTENER_CLI = 8
-LISTENER_IPC = 1
-LISTENER_NETCONF = 2
-LISTENER_SNMP = 4
-LISTENER_WEBUI = 16
-LOAD_SCHEMA_HASH = 65536
-LOAD_SCHEMA_NODES = 1
-LOAD_SCHEMA_TYPES = 2
-MMAP_SCHEMAS_FIXED_ADDR = 2
-MMAP_SCHEMAS_KEEP_SIZE = 1
-MOP_ATTR_SET = 6
-MOP_CREATED = 1
-MOP_DELETED = 2
-MOP_MODIFIED = 3
-MOP_MOVED_AFTER = 5
-MOP_VALUE_SET = 4
-NCS_ERR_CONNECTION_CLOSED = 64
-NCS_ERR_CONNECTION_REFUSED = 56
-NCS_ERR_CONNECTION_TIMEOUT = 63
-NCS_ERR_DEVICE = 65
-NCS_ERR_SERVICE_CONFLICT = 62
-NCS_ERR_TEMPLATE = 66
-NCS_LISTENER_NETCONF_CALL_HOME = 32
-NCS_PORT = 4569
-NO_DB = 0
-OK = 0
-OPERATIONAL = 4
-PATH = None
-PORT = 4569
-PRE_COMMIT_RUNNING = 6
-PROGRESS_INFO = 3
-PROGRESS_START = 1
-PROGRESS_STOP = 2
-PROTO_CONSOLE = 4
-PROTO_HTTP = 6
-PROTO_HTTPS = 7
-PROTO_SSH = 2
-PROTO_SSL = 5
-PROTO_SYSTEM = 3
-PROTO_TCP = 1
-PROTO_TLS = 9
-PROTO_TRACE = 3
-PROTO_UDP = 8
-PROTO_UNKNOWN = 0
-QUERY_HKEYPATH = 1
-QUERY_HKEYPATH_VALUE = 2
-QUERY_STRING = 0
-QUERY_TAG_VALUE = 3
-READ = 1
-READ_WRITE = 2
-RUNNING = 2
-SERIAL_HKEYPATH = 2
-SERIAL_NONE = 0
-SERIAL_TAG_VALUE = 3
-SERIAL_VALUE_T = 1
-SILENT = 0
-SNMP_COL_ROW = 3
-SNMP_Counter32 = 6
-SNMP_Counter64 = 9
-SNMP_INTEGER = 1
-SNMP_Interger32 = 2
-SNMP_IpAddress = 5
-SNMP_NULL = 0
-SNMP_OBJECT_IDENTIFIER = 4
-SNMP_OCTET_STRING = 3
-SNMP_OID = 2
-SNMP_Opaque = 8
-SNMP_TimeTicks = 7
-SNMP_Unsigned32 = 10
-SNMP_VARIABLE = 1
-STARTUP = 3
-TIMEZONE_UNDEF = -111
-TRACE = 2
-TRANSACTION = 5
-TRANS_CB_FLAG_FILTERED = 1
-TRUE = 1
-USESS_FLAG_FORWARD = 1
-USESS_FLAG_HAS_IDENTIFICATION = 2
-USESS_FLAG_HAS_OPAQUE = 4
-USESS_LOCK_MODE_EXCLUSIVE = 2
-USESS_LOCK_MODE_NONE = 0
-USESS_LOCK_MODE_PRIVATE = 1
-USESS_LOCK_MODE_SHARED = 3
-VALIDATION_FLAG_COMMIT = 2
-VALIDATION_FLAG_TEST = 1
-VALIDATION_WARN = -3
-VERBOSITY_DEBUG = 3
-VERBOSITY_NORMAL = 0
-VERBOSITY_VERBOSE = 1
-VERBOSITY_VERY_VERBOSE = 2
-```
diff --git a/developer-reference/pyapi/modules.lst b/developer-reference/pyapi/modules.lst
deleted file mode 100644
index 321ef797..00000000
--- a/developer-reference/pyapi/modules.lst
+++ /dev/null
@@ -1,20 +0,0 @@
-ncs
-ncs.alarm
-ncs.application
-ncs.cdb
-ncs.dp
-ncs.experimental
-ncs.log
-ncs.maagic
-ncs.maapi
-ncs.progress
-ncs.service_log
-ncs.template
-ncs.util
-_ncs
-_ncs.cdb
-_ncs.dp
-_ncs.error
-_ncs.events
-_ncs.ha
-_ncs.maapi
diff --git a/developer-reference/pyapi/ncs.alarm.md b/developer-reference/pyapi/ncs.alarm.md
deleted file mode 100644
index b41a350c..00000000
--- a/developer-reference/pyapi/ncs.alarm.md
+++ /dev/null
@@ -1,235 +0,0 @@
-# Python ncs.alarm Module
-
-NCS Alarm Manager module.
-
-## Functions
-
-### clear_alarm
-
-```python
-clear_alarm(alarm)
-```
-
-Clear an alarm.
-
-Arguments:
- alarm -- An instance of Alarm.
-
-### managed_object_instance
-
-```python
-managed_object_instance(instanceval)
-```
-
-Create a managed object of type instance-identifier.
-
-Arguments:
- instanceval -- The instance-identifier (string or HKeypathRef)
-
-### managed_object_oid
-
-```python
-managed_object_oid(oidval)
-```
-
-Create a managed object of type yang:object-identifier.
-
-Arguments:
- oidval -- The OID (string)
-
-### managed_object_string
-
-```python
-managed_object_string(strval)
-```
-
-Create a managed object of type string.
-
-Arguments:
- strval -- The string value
-
-### raise_alarm
-
-```python
-raise_alarm(alarm)
-```
-
-Raise an alarm.
-
-Arguments:
- alarm -- An instance of Alarm.
-
-
-## Classes
-
-### _class_ **Alarm**
-
-Class representing an alarm.
-
-```python
-Alarm(managed_device, managed_object, alarm_type, specific_problem, severity, alarm_text, impacted_objects=None, related_alarms=None, root_cause_objects=None, time_stamp=None, custom_attributes=None)
-```
-
-Create an Alarm object.
-
-Arguments:
-managed_device
- The managed device this alarm is associated with. Plain string
- which identifies the device.
-managed_object
- The managed object this alarm is associated with. Also referred
- to as the "Alarming Object". This object may not be referred to
- in the root_cause_objects parameter. If an NCS Service
- generates an alarm based on an error state in a device used by
- that service, managed_object should be the service Id and the
- device should be included in the root_cause_objects list. This
- parameter must be a ncs.Value object. Use one of the methods
- managed_object_string(), managed_object_oid() or
- managed_object_instance() to create the value.
-alarm_type
- Type of alarm. This is a YANG identity. Alarm types are defined
- by the YANG developer and should be designed to be as specific
- as possible.
-specific_problem
- If the alarm_type isn't enough to describe the alarm, this
- field can be used in combination. Keep in mind that when
- dynamically adding a specific problem, there is no way for the
- operator to know in advance which alarms can be raised.
-severity
- State of the alarm; cleared, indeterminate, critical, major,
- minor, warning (enum).
-alarm_text
- A human readable description of this problem.
-impacted_objects
- A list of Managed Objects that may no longer function due to
- this alarm. Typically these point to NCS Services that are
- dependent on the objects on the device that reported the
- problem. In NCS 2.3 and later there is a backpointer attribute
- available on objects in the device tree that has been created by
- a Service. These backpointers are instance reference pointers
- that should be set in this list. Use one of the methods
- managed_object_string(), managed_object_oid() or
- managed_object_instance() to create the instances to populate
- this list.
-related_alarms
- References to other alarms that have been generated as a
- consequence of this alarm, or that has some other relationship
- to this alarm. Should be a list of AlarmId instances.
-root_cause_objects
- A list of Managed Objects that are likely to be the root cause
- of this alarm. This is different from the "Alarming Object". See
- managed_object above for details. Use one of the methods
- managed_object_string(), managed_object_oid() or
- managed_object_instance() to create the instances to populate
- this list.
-time_stamp
- A date-and-time when this alarm was generated.
-custom_attributes
- A list of custom leafs augmented into the alarm list.
-
-Members:
-
-
-
-add_attribute(...)
-
-Method:
-
-```python
-add_attribute(self, prefix, tag, value)
-```
-
-Add or update a custom attribute.
-
-
-
-
-
-add_status_attribute(...)
-
-Method:
-
-```python
-add_status_attribute(self, prefix, tag, value)
-```
-
-Add or update a custom status change attribute.
-
-
-
-
-
-alarm_id(...)
-
-Method:
-
-```python
-alarm_id(self)
-```
-
-Get the unique Id of this alarm as an AlarmId instance.
-
-
-
-
-
-get_key(...)
-
-Method:
-
-```python
-get_key(self)
-```
-
-Get alarm list key.
-
-
-
-
-
-key
-
-_Readonly property_
-
-Get alarm list key.
-
-
-
-### _class_ **AlarmId**
-
-Represents the unique Id of an Alarm.
-
-```python
-AlarmId(alarm_type, managed_device, managed_object, specific_problem=None)
-```
-
-Create an AlarmId.
-
-Members:
-
-_None_
-
-### _class_ **CustomAttribute**
-
-Class representing a custom attribute set on an alarm.
-
-```python
-CustomAttribute(prefix, tag, value)
-```
-
-Members:
-
-_None_
-
-### _class_ **CustomStatusAttribute**
-
-Class representing a custom status attribute set on an alarm.
-
-```python
-CustomStatusAttribute(prefix, tag, value)
-```
-
-Members:
-
-_None_
-
diff --git a/developer-reference/pyapi/ncs.application.md b/developer-reference/pyapi/ncs.application.md
deleted file mode 100644
index d6e721f7..00000000
--- a/developer-reference/pyapi/ncs.application.md
+++ /dev/null
@@ -1,896 +0,0 @@
-# Python ncs.application Module
-
-Module for building NCS applications.
-
-## Functions
-
-### get_device
-
-```python
-get_device(node, name)
-```
-
-Get a device node by name.
-
-Returns a maagic node representing a device.
-
-Arguments:
-
-* node -- any maagic node with a Transaction backend or a Transaction object
-* name -- the device name (string)
-
-Returns:
-
-* device node (maagic.Node)
-
-### get_ned_id
-
-```python
-get_ned_id(device)
-```
-
-Get the ned-id of a device.
-
-Returns the ned-id as a string or None if not found.
-
-Arguments:
-
-* device -- a maagic node representing the device (maagic.Node)
-
-Returns:
-
-* ned_id (str)
-
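-Example usage of the two functions above (a minimal sketch, assuming
-a device named 'ce0' exists):
-
-    import ncs
-    from ncs.application import get_device, get_ned_id
-
-    # Open a read transaction and look up the device and its ned-id.
-    with ncs.maapi.single_read_trans('admin', 'system') as t:
-        dev = get_device(t, 'ce0')
-        print(get_ned_id(dev))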
-
-## Classes
-
-### _class_ **Application**
-
-Class for easy implementation of an NCS application.
-
-This class is intended to be sub-classed and used as a 'component class'
-inside an NCS package. It will be instantiated by NCS when the package
-is loaded. The setup() method should be implemented to register
-service and action callbacks. When NCS stops or an error occurs,
-teardown() will be called. A 'log' attribute is available for logging.
-
-Example application:
-
- from ncs.application import Application, Service, NanoService
- from ncs.dp import Action, ValidationPoint
-
- class FooService(Service):
- @Service.create
- def cb_create(self, tctx, root, service, proplist):
- # service code here
-
- class FooNanoService(NanoService):
- @NanoService.create
- def cb_nano_create(self, tctx, root, service, plan, component,
- state, proplist, compproplist):
- # service code here
-
- class FooAction(Action):
- @Action.action
- def cb_action(self, uinfo, name, kp, input, output):
- # action code here
-
- class FooValidation(ValidationPoint):
- @ValidationPoint.validate
- def cb_validate(self, tctx, keypath, value, validationpoint):
- # validation code here
-
- class MyApp(Application):
- def setup(self):
- self.log.debug('MyApp start')
- self.register_service('myservice-1', FooService)
- self.register_service('myservice-2', FooService, 'init_arg')
- self.register_nano_service('nano-1', 'myserv:router',
- 'myserv:ntp-initialized',
- FooNanoService)
- self.register_action('action-1', FooAction)
- self.register_validation('validation-1', FooValidation)
-
- def teardown(self):
- self.log.debug('MyApp finish')
-
-```python
-Application(*args, **kwds)
-```
-
-Initialize an Application object.
-
-Not designed to be instantiated directly; these objects are created
-by NCS.
-
-Members:
-
-
-
-APP_WORKER_STOP_TIMEOUT_S
-
-```python
-APP_WORKER_STOP_TIMEOUT_S = 1
-```
-
-
-
-
-
-
-add_running_thread(...)
-
-Method:
-
-```python
-add_running_thread(self, class_name)
-```
-
-
-
-
-
-
-create_daemon(...)
-
-Method:
-
-```python
-create_daemon(self, name=None)
-```
-
-Name the underlying dp.Daemon object (deprecated)
-
-
-
-
-
-critical(...)
-
-Method:
-
-```python
-critical(self, line)
-```
-
-
-
-
-
-
-debug(...)
-
-Method:
-
-```python
-debug(self, line)
-```
-
-
-
-
-
-
-del_running_thread(...)
-
-Method:
-
-```python
-del_running_thread(self, class_name)
-```
-
-
-
-
-
-
-error(...)
-
-Method:
-
-```python
-error(self, line)
-```
-
-
-
-
-
-
-exception(...)
-
-Method:
-
-```python
-exception(self, line)
-```
-
-
-
-
-
-
-info(...)
-
-Method:
-
-```python
-info(self, line)
-```
-
-
-
-
-
-
-reg_finish(...)
-
-Method:
-
-```python
-reg_finish(self, cbfun)
-```
-
-
-
-
-
-
-register_action(...)
-
-Method:
-
-```python
-register_action(self, actionpoint, action_cls, init_args=None)
-```
-
-Register an action callback class.
-
-Call this method to register 'action_cls' as the action callback
-class for action point 'actionpoint'. 'action_cls' should be a
-subclass of dp.Action. If the optional argument 'init_args' is
-supplied it will be passed in to the init() method of the subclass.
-
-Arguments:
-
-* actionpoint -- actionpoint (str)
-* action_cls -- action callback class
-* init_args -- initial arguments (optional)
-
-
-
-
-
-register_fun(...)
-
-Method:
-
-```python
-register_fun(self, start_fun, stop_fun)
-```
-
-Register custom start and stop functions.
-
-Call this method to register a start and stop function that
-will be called with a dp.Daemon.State during application
-setup.
-
-Example start and stop functions:
-
- def my_start_fun(state):
- state.log.info('my_start_fun START')
- return (state, time.time())
-
- def my_stop_fun(fun_data):
- (state, start_time) = fun_data
- state.log.info('my_start_fun started {}'.format(start_time))
- state.log.info('my_start_fun STOP')
-
-Arguments:
-
-* start_fun -- start function (fun)
-* stop_fun -- stop function (fun)
-
-
-
-
-
-register_nano_service(...)
-
-Method:
-
-```python
-register_nano_service(self, servicepoint, componenttype, state, nano_service_cls, init_args=None)
-```
-
-Register a nano service callback class.
-
-Call this method to register 'nano_service_cls' as the nano service
-callback class for service point 'servicepoint'.
-'nano_service_cls' should be a subclass of NanoService.
-If the optional argument 'init_args' is supplied
-it will be passed in to the init() method of the subclass.
-
-Arguments:
-
-* servicepoint -- servicepoint (str)
-* componenttype -- nano plan component (str)
-* state -- nano plan state (str)
-* nano_service_cls -- nano service callback class
-* init_args -- initial arguments (optional)
-
-
-
-
-
-register_service(...)
-
-Method:
-
-```python
-register_service(self, servicepoint, service_cls, init_args=None)
-```
-
-Register a service callback class.
-
-Call this method to register 'service_cls' as the service callback
-class for service point 'servicepoint'. 'service_cls' should be a
-subclass of Service. If the optional argument 'init_args' is supplied
-it will be passed in to the init() method of the subclass.
-
-Arguments:
-
-* servicepoint -- servicepoint (str)
-* service_cls -- service callback class
-* init_args -- initial arguments (optional)
-
-
-
-
-
-register_trans_cb(...)
-
-Method:
-
-```python
-register_trans_cb(self, trans_cb_cls)
-```
-
-Register a transaction callback class.
-
-If a custom transaction callback implementation is needed, call this
-method with the transaction callback class as the 'trans_cb_cls'
-argument.
-
-Arguments:
-
-* trans_cb_cls -- transaction callback class
-
-
-
-
-
-register_validation(...)
-
-Method:
-
-```python
-register_validation(self, validationpoint, validation_cls, init_args=None)
-```
-
-Register a validation callback class.
-
-Call this method to register 'validation_cls' as the
-validation callback class for validation point
-'validationpoint'. 'validation_cls' should be a subclass of
-ValidationPoint. If the optional argument 'init_args' is
-supplied it will be passed in to the init() method of the
-subclass.
-
-Arguments:
-
-* validationpoint -- validationpoint (str)
-* validation_cls -- validation callback class
-* init_args -- initial arguments (optional)
-
-
-
-
-
-set_log_level(...)
-
-Method:
-
-```python
-set_log_level(self, log_level)
-```
-
-Set log level for all workers (only relevant for
-_ProcessAppWorker)
-
-Arguments:
-
-* log_level -- logging level, using logging.Logger (int)
-
-
-
-
-
-set_self_assign_warning(...)
-
-Method:
-
-```python
-set_self_assign_warning(self, warning)
-```
-
-Set self assign warning for all workers.
-
-Arguments:
-
-* warning -- warning type: 'alarm', 'log' or 'off' (str)
-
-
-
-
-
-setup(...)
-
-Method:
-
-```python
-setup(self)
-```
-
-Application setup method.
-
-Override this method to register actions and services. Any other
-initialization could also be done here. If the call to this method
-throws an exception, the teardown method will be called immediately
-and the application shut down.
-
-
-
-
-
-teardown(...)
-
-Method:
-
-```python
-teardown(self)
-```
-
-Application teardown method.
-
-Override this method to clean up custom resources allocated in
-setup().
-
-
-
-
-
-unreg_finish(...)
-
-Method:
-
-```python
-unreg_finish(self, cbfun)
-```
-
-
-
-
-
-
-warning(...)
-
-Method:
-
-```python
-warning(self, line)
-```
-
-
-
-
-### _class_ **NanoService**
-
-NanoService callback.
-
-This class makes it easy to create and register nano service callbacks by
-subclassing it and implementing some of the nano service callbacks.
-
-```python
-NanoService(daemon, servicepoint, componenttype, state, log=None, init_args=None)
-```
-
-Initialize this object.
-
-The 'daemon' argument should be a Daemon instance. 'servicepoint'
-is the name of the tailf:servicepoint to manage. Argument 'log' can
-be any log object, and if not set the Daemon log will be used.
-'init_args' may be any object that will be passed into init() when
-this object is constructed. Lastly, the low-level function
-dp.register_nano_service_cb() will be called.
-
-When creating a nano service callback using
-Application.register_nano_service there is no need to manually
-initialize this object as it is then done automatically.
-
-Members:
-
-
-
-create(...)
-
-Static method:
-
-```python
-create(fn)
-```
-
-Decorator for the cb_nano_create callback.
-
-Using this decorator alters the signature of the cb_nano_create callback
-and passes in maagic.Node objects for root and service.
-The maagic.Node objects received in 'root' and 'service' are backed
-by a MAAPI connection with the FASTMAP handle attached. To update
-'proplist' simply return it from this function.
-
-Example of a decorated cb_nano_create:
-
- @NanoService.create
- def cb_nano_create(self, tctx, root,
- service, plan, component, state,
- proplist, compproplist):
- pass
-
-Callback arguments:
-
-* tctx -- transaction context (TransCtxRef)
-* root -- root node (maagic.Node)
-* service -- service node (maagic.Node)
-* plan -- current plan node (maagic.Node)
-* component -- plan component active for this invocation
-* state -- plan component state active for this invocation
-* proplist -- properties (list(tuple(str, str)))
-* compproplist -- component properties (list(tuple(str, str)))
-
-
-
-
-
-delete(...)
-
-Static method:
-
-```python
-delete(fn)
-```
-
-Decorator for the cb_nano_delete callback.
-
-Using this decorator alters the signature of the cb_nano_delete callback
-and passes in maagic.Node objects for root and service.
-The maagic.Node objects received in 'root' and 'service' are backed
-by a MAAPI connection with the FASTMAP handle attached. To update
-'proplist' simply return it from this function.
-
-Example of a decorated cb_nano_delete:
-
- @NanoService.delete
- def cb_nano_delete(self, tctx, root,
- service, plan, component, state,
- proplist, compproplist):
- pass
-
-Callback arguments:
-
-* tctx -- transaction context (TransCtxRef)
-* root -- root node (maagic.Node)
-* service -- service node (maagic.Node)
-* plan -- current plan node (maagic.Node)
-* component -- plan component active for this invocation
-* state -- plan component state active for this invocation
-* proplist -- properties (list(tuple(str, str)))
-* compproplist -- component properties (list(tuple(str, str)))
-
-
-
-
-
-init(...)
-
-Method:
-
-```python
-init(self, init_args)
-```
-
-Custom initialization.
-
-When registering a nano service using Application this method will be
-called with the 'init_args' passed into the register_nano_service()
-function.
-
-
-
-
-
-maapi
-
-_Readonly property_
-
-
-
-
-
-
-start(...)
-
-Method:
-
-```python
-start(self)
-```
-
-Start NanoService
-
-
-
-
-
-stop(...)
-
-Method:
-
-```python
-stop(self)
-```
-
-Stop NanoService
-
-
-
-### _class_ **PlanComponent**
-
-Service plan component.
-
-This class is used in conjunction with a service that follows
-the reactive FASTMAP pattern.
-With a plan the service states can be tracked and controlled.
-
-A service plan can consist of many PlanComponent instances.
-The plan is operational data that is stored together with the service
-configuration.
-
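-A typical pattern, sketched below, is to create a 'self' component
-from within a service cb_create and advance its states as the service
-progresses (the component and state names follow common NSO
-conventions but are otherwise illustrative):
-
-    from ncs.application import PlanComponent
-
-    # 'service' is the service maagic node passed to cb_create.
-    self_plan = PlanComponent(service, 'self', 'ncs:self')
-    self_plan.append_state('ncs:init')
-    self_plan.append_state('ncs:ready')
-
-    # Mark progress as the service is deployed.
-    self_plan.set_reached('ncs:init')
-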
-```python
-PlanComponent(planpath, name, component_type)
-```
-
-Initialize a PlanComponent.
-
-Members:
-
-
-
-append_state(...)
-
-Method:
-
-```python
-append_state(self, state_name)
-```
-
-Append a new state to this plan component.
-
-The state status will be initialized to 'ncs:not-reached'.
-
-
-
-
-
-set_failed(...)
-
-Method:
-
-```python
-set_failed(self, state_name)
-```
-
-Set state status to 'ncs:failed'.
-
-
-
-
-
-set_reached(...)
-
-Method:
-
-```python
-set_reached(self, state_name)
-```
-
-Set state status to 'ncs:reached'.
-
-
-
-
-
-set_status(...)
-
-Method:
-
-```python
-set_status(self, state_name, status)
-```
-
-Set state status.
-
-
-
-### _class_ **Service**
-
-Service callback.
-
-This class makes it easy to create and register service callbacks by
-subclassing it and implementing some of the service callbacks.
-
-```python
-Service(daemon, servicepoint, log=None, init_args=None)
-```
-
-Initialize this object.
-
-The 'daemon' argument should be a Daemon instance. 'servicepoint'
-is the name of the tailf:servicepoint to manage. Argument 'log' can
-be any log object, and if not set the Daemon log will be used.
-'init_args' may be any object that will be passed into init() when
-this object is constructed. Lastly, the low-level function
-dp.register_service_cb() will be called.
-
-When creating a service callback using Application.register_service
-there is no need to manually initialize this object as it is then
-done automatically.
-
-Members:
-
-
-
-create(...)
-
-Static method:
-
-```python
-create(fn)
-```
-
-Decorator for the cb_create callback.
-
-Using this decorator alters the signature of the cb_create callback
-and passes in maagic.Node objects for root and service.
-The maagic.Node objects received in 'root' and 'service' are backed
-by a MAAPI connection with the FASTMAP handle attached. To update
-'proplist' simply return it from this function.
-
-Example of a decorated cb_create:
-
- @Service.create
- def cb_create(self, tctx, root, service, proplist):
- pass
-
-Callback arguments:
-
-* tctx -- transaction context (TransCtxRef)
-* root -- root node (maagic.Node)
-* service -- service node (maagic.Node)
-* proplist -- properties (list(tuple(str, str)))
-
-
-
-
-
-init(...)
-
-Method:
-
-```python
-init(self, init_args)
-```
-
-Custom initialization.
-
-When registering a service using Application this method will be
-called with the 'init_args' passed into the register_service()
-function.
-
-
-
-
-
-maapi
-
-_Readonly property_
-
-
-
-
-
-
-post_modification(...)
-
-Static method:
-
-```python
-post_modification(fn)
-```
-
-Decorator for the cb_post_modification callback.
-
-For details see Service.pre_modification decorator.
-
-
-
-
-
-pre_modification(...)
-
-Static method:
-
-```python
-pre_modification(fn)
-```
-
-Decorator for the cb_pre_modification callback.
-
-Using this decorator alters the signature of the cb_pre_modification
-callback and passes in a maagic.Node object for root.
-This method is invoked outside FASTMAP. To update 'proplist' simply
-return it from this function.
-
-Example of a decorated cb_pre_modification:
-
- @Service.pre_modification
- def cb_pre_modification(self, tctx, op, kp, root, proplist):
- pass
-
-Callback arguments:
-
-* tctx -- transaction context (TransCtxRef)
-* op -- operation (int)
-* kp -- keypath (HKeypathRef)
-* root -- root node (maagic.Node)
-* proplist -- properties (list(tuple(str, str)))
-
-
-
-
-
-start(...)
-
-Method:
-
-```python
-start(self)
-```
-
-Start Service
-
-
-
-
-
-stop(...)
-
-Method:
-
-```python
-stop(self)
-```
-
-Stop Service
-
-
-
diff --git a/developer-reference/pyapi/ncs.cdb.md b/developer-reference/pyapi/ncs.cdb.md
deleted file mode 100644
index 22c241a2..00000000
--- a/developer-reference/pyapi/ncs.cdb.md
+++ /dev/null
@@ -1,1000 +0,0 @@
-# Python ncs.cdb Module
-
-CDB high level module.
-
-This module implements a couple of classes for subscribing
-to CDB events.
-
-## Classes
-
-### _class_ **OperSubscriber**
-
-CDB Subscriber for oper data.
-
-Use this class when subscribing to operational data. In all other
-respects the behavior is the same as for Subscriber().
-
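-A minimal usage sketch ('/my:stats' is a hypothetical path to
-operational data stored in CDB, and MyIter is an iterator object like
-the one shown for Subscriber below):
-
-    sub = OperSubscriber()
-    sub.register('/my:stats', MyIter())
-    sub.start()
-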
-```python
-OperSubscriber(app=None, log=None, host='127.0.0.1', port=4569, path=None)
-```
-
-Initialize an OperSubscriber.
-
-Members:
-
-
-
-daemon
-
-A boolean value indicating whether this thread is a daemon thread.
-
-This must be set before start() is called, otherwise RuntimeError is
-raised. Its initial value is inherited from the creating thread; the
-main thread is not a daemon thread and therefore all threads created in
-the main thread default to daemon = False.
-
-The entire Python program exits when only daemon threads are left.
-
-
-
-
-
-getName(...)
-
-Method:
-
-```python
-getName(self)
-```
-
-Return a string used for identification purposes only.
-
-This method is deprecated, use the name attribute instead.
-
-
-
-
-
-ident
-
-_Readonly property_
-
-Thread identifier of this thread or None if it has not been started.
-
-This is a nonzero integer. See the get_ident() function. Thread
-identifiers may be recycled when a thread exits and another thread is
-created. The identifier is available even after the thread has exited.
-
-
-
-
-
-init(...)
-
-Method:
-
-```python
-init(self)
-```
-
-Custom initialization.
-
-Override this method to do custom initialization without needing
-to override __init__.
-
-
-
-
-
-isDaemon(...)
-
-Method:
-
-```python
-isDaemon(self)
-```
-
-Return whether this thread is a daemon.
-
-This method is deprecated, use the daemon attribute instead.
-
-
-
-
-
-is_alive(...)
-
-Method:
-
-```python
-is_alive(self)
-```
-
-Return whether the thread is alive.
-
-This method returns True just before the run() method starts until just
-after the run() method terminates. See also the module function
-enumerate().
-
-
-
-
-
-join(...)
-
-Method:
-
-```python
-join(self, timeout=None)
-```
-
-Wait until the thread terminates.
-
-This blocks the calling thread until the thread whose join() method is
-called terminates -- either normally or through an unhandled exception
-or until the optional timeout occurs.
-
-When the timeout argument is present and not None, it should be a
-floating point number specifying a timeout for the operation in seconds
-(or fractions thereof). As join() always returns None, you must call
-is_alive() after join() to decide whether a timeout happened -- if the
-thread is still alive, the join() call timed out.
-
-When the timeout argument is not present or None, the operation will
-block until the thread terminates.
-
-A thread can be join()ed many times.
-
-join() raises a RuntimeError if an attempt is made to join the current
-thread as that would cause a deadlock. It is also an error to join() a
-thread before it has been started and attempts to do so raises the same
-exception.
-
-
-
-
-
-name
-
-A string used for identification purposes only.
-
-It has no semantics. Multiple threads may be given the same name. The
-initial name is set by the constructor.
-
-
-
-
-
-native_id
-
-_Readonly property_
-
-Native integral thread ID of this thread, or None if it has not been started.
-
-This is a non-negative integer. See the get_native_id() function.
-This represents the Thread ID as reported by the kernel.
-
-
-
-
-
-register(...)
-
-Method:
-
-```python
-register(self, path, iter_obj=None, iter_flags=1, priority=0, flags=0, subtype=None)
-```
-
-Register an iterator object at a specific path.
-
-Setting 'iter_obj' to None will internally use 'self' as the iterator
-object which means that Subscriber needs to be sub-classed.
-
-Operational and configuration subscriptions can be done on the
-same Subscriber, but in that case the notifications may be
-arbitrarily interleaved, including operational notifications
-arriving between different configuration notifications for the
-same transaction. If this is a problem, use separate
-Subscriber instances for operational and configuration
-subscriptions.
-
-Arguments:
-
-* path -- path to node (str)
-* iter_obj -- iterator object (obj, optional)
-* iter_flags -- iterator flags (int, optional)
-* priority -- priority order for subscribers (int)
-* flags -- additional subscriber flags (int)
-* subtype -- subscriber type SUB_RUNNING, SUB_RUNNING_TWOPHASE,
- SUB_OPERATIONAL (cdb)
-
-Returns:
-
-* subscription point (int)
-
-Flags (cdb):
-
-* SUB_WANT_ABORT_ON_ABORT
-
-Iterator Flags (ncs):
-
-* ITER_WANT_PREV
-* ITER_WANT_ANCESTOR_DELETE
-* ITER_WANT_ATTR
-* ITER_WANT_CLI_STR
-* ITER_WANT_SCHEMA_ORDER
-* ITER_WANT_LEAF_FIRST_ORDER
-* ITER_WANT_LEAF_LAST_ORDER
-* ITER_WANT_REVERSE
-* ITER_WANT_P_CONTAINER
-* ITER_WANT_CLI_ORDER
-
-
-
-
-
-run(...)
-
-Method:
-
-```python
-run(self)
-```
-
-Main processing loop.
-
-
-
-
-
-setDaemon(...)
-
-Method:
-
-```python
-setDaemon(self, daemonic)
-```
-
-Set whether this thread is a daemon.
-
-This method is deprecated, use the .daemon property instead.
-
-
-
-
-
-setName(...)
-
-Method:
-
-```python
-setName(self, name)
-```
-
-Set the name string for this thread.
-
-This method is deprecated, use the name attribute instead.
-
-
-
-
-
-start(...)
-
-Method:
-
-```python
-start(self)
-```
-
-Start the subscriber.
-
-
-
-
-
-stop(...)
-
-Method:
-
-```python
-stop(self)
-```
-
-Stop the subscriber.
-
-
-
-### _class_ **Subscriber**
-
-CDB Subscriber for config data.
-
-Supports the pattern of collecting changes and then handling the changes in
-a separate thread. For each subscription point a handler object must be
-registered. The following methods will be called on the handler:
-
-* pre_iterate() (optional)
-
- Called just before iteration starts, may return a state object
- which will be passed on to the iterate method. If not implemented,
- the state object will be None.
-
-* iterate(kp, op, oldv, newv, state) (mandatory)
-
- Called for each change in the change set.
-
-* post_iterate(state) (optional)
-
- Runs in a separate thread once iteration has finished and the
- subscription socket has been synced. Will receive the final state
- object from iterate() as an argument.
-
-* should_iterate() (optional)
-
- Called to check if the subscriber wants to iterate. If this method
- returns False, neither pre_iterate() nor iterate() will be called.
- Can e.g. be used by HA secondary nodes to skip iteration. If not
- implemented, pre_iterate() and iterate() will always be called.
-
-* should_post_iterate(state) (optional)
-
- Called to determine whether post_iterate() should be called
- or not. It is recommended to implement this method to prevent
- the subscriber from calling post_iterate() when not needed.
- Should return True if post_iterate() should run, otherwise False.
- If not implemented, post_iterate() will always be called.
-
-Example iterator object:
-
- class MyIter(object):
- def pre_iterate(self):
- return []
-
- def iterate(self, kp, op, oldv, newv, state):
- if op is ncs.MOP_VALUE_SET:
- state.append(newv)
- return ncs.ITER_RECURSE
-
- def post_iterate(self, state):
- for item in state:
- print(item)
-
- def should_post_iterate(self, state):
- return state != []
-
-The same handler may be registered for multiple subscription points.
-In that case, pre_iterate() will only be called once, followed by iterate
-calls for all subscription points, and finally a single call to
-post_iterate().
-
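-Registering and starting a subscriber with the example iterator could
-look like this ('/devices/device' is just an illustrative path):
-
-    sub = Subscriber()
-    sub.register('/devices/device', MyIter())
-    sub.start()
-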
-```python
-Subscriber(app=None, log=None, host='127.0.0.1', port=4569, subtype=1, name='', path=None)
-```
-
-Initialize a Subscriber.
-
-Members:
-
-
-
-daemon
-
-A boolean value indicating whether this thread is a daemon thread.
-
-This must be set before start() is called, otherwise RuntimeError is
-raised. Its initial value is inherited from the creating thread; the
-main thread is not a daemon thread and therefore all threads created in
-the main thread default to daemon = False.
-
-The entire Python program exits when only daemon threads are left.
-
-
-
-
-
-getName(...)
-
-Method:
-
-```python
-getName(self)
-```
-
-Return a string used for identification purposes only.
-
-This method is deprecated, use the name attribute instead.
-
-
-
-
-
-ident
-
-_Readonly property_
-
-Thread identifier of this thread or None if it has not been started.
-
-This is a nonzero integer. See the get_ident() function. Thread
-identifiers may be recycled when a thread exits and another thread is
-created. The identifier is available even after the thread has exited.
-
-
-
-
-
-init(...)
-
-Method:
-
-```python
-init(self)
-```
-
-Custom initialization.
-
-Override this method to do custom initialization without needing
-to override __init__.
-
-
-
-
-
-isDaemon(...)
-
-Method:
-
-```python
-isDaemon(self)
-```
-
-Return whether this thread is a daemon.
-
-This method is deprecated, use the daemon attribute instead.
-
-
-
-
-
-is_alive(...)
-
-Method:
-
-```python
-is_alive(self)
-```
-
-Return whether the thread is alive.
-
-This method returns True just before the run() method starts until just
-after the run() method terminates. See also the module function
-enumerate().
-
-
-
-
-
-join(...)
-
-Method:
-
-```python
-join(self, timeout=None)
-```
-
-Wait until the thread terminates.
-
-This blocks the calling thread until the thread whose join() method is
-called terminates -- either normally or through an unhandled exception
-or until the optional timeout occurs.
-
-When the timeout argument is present and not None, it should be a
-floating point number specifying a timeout for the operation in seconds
-(or fractions thereof). As join() always returns None, you must call
-is_alive() after join() to decide whether a timeout happened -- if the
-thread is still alive, the join() call timed out.
-
-When the timeout argument is not present or None, the operation will
-block until the thread terminates.
-
-A thread can be join()ed many times.
-
-join() raises a RuntimeError if an attempt is made to join the current
-thread as that would cause a deadlock. It is also an error to join() a
-thread before it has been started and attempts to do so raises the same
-exception.
-
-
-
-
-
-name
-
-A string used for identification purposes only.
-
-It has no semantics. Multiple threads may be given the same name. The
-initial name is set by the constructor.
-
-
-
-
-
-native_id
-
-_Readonly property_
-
-Native integral thread ID of this thread, or None if it has not been started.
-
-This is a non-negative integer. See the get_native_id() function.
-This represents the Thread ID as reported by the kernel.
-
-
-
-
-
-register(...)
-
-Method:
-
-```python
-register(self, path, iter_obj=None, iter_flags=1, priority=0, flags=0, subtype=None)
-```
-
-Register an iterator object at a specific path.
-
-Setting 'iter_obj' to None will internally use 'self' as the iterator
-object which means that Subscriber needs to be sub-classed.
-
-Operational and configuration subscriptions can be done on the
-same Subscriber, but in that case the notifications may be
-arbitrarily interleaved, including operational notifications
-arriving between different configuration notifications for the
-same transaction. If this is a problem, use separate
-Subscriber instances for operational and configuration
-subscriptions.
-
-Arguments:
-
-* path -- path to node (str)
-* iter_obj -- iterator object (obj, optional)
-* iter_flags -- iterator flags (int, optional)
-* priority -- priority order for subscribers (int)
-* flags -- additional subscriber flags (int)
-* subtype -- subscriber type SUB_RUNNING, SUB_RUNNING_TWOPHASE,
- SUB_OPERATIONAL (cdb)
-
-Returns:
-
-* subscription point (int)
-
-Flags (cdb):
-
-* SUB_WANT_ABORT_ON_ABORT
-
-Iterator Flags (ncs):
-
-* ITER_WANT_PREV
-* ITER_WANT_ANCESTOR_DELETE
-* ITER_WANT_ATTR
-* ITER_WANT_CLI_STR
-* ITER_WANT_SCHEMA_ORDER
-* ITER_WANT_LEAF_FIRST_ORDER
-* ITER_WANT_LEAF_LAST_ORDER
-* ITER_WANT_REVERSE
-* ITER_WANT_P_CONTAINER
-* ITER_WANT_CLI_ORDER
-
-
-
-
-
-run(...)
-
-Method:
-
-```python
-run(self)
-```
-
-Main processing loop.
-
-
-
-
-
-setDaemon(...)
-
-Method:
-
-```python
-setDaemon(self, daemonic)
-```
-
-Set whether this thread is a daemon.
-
-This method is deprecated, use the .daemon property instead.
-
-
-
-
-
-setName(...)
-
-Method:
-
-```python
-setName(self, name)
-```
-
-Set the name string for this thread.
-
-This method is deprecated, use the name attribute instead.
-
-
-
-
-
-start(...)
-
-Method:
-
-```python
-start(self)
-```
-
-Start the subscriber.
-
-
-
-
-
-stop(...)
-
-Method:
-
-```python
-stop(self)
-```
-
-Stop the subscriber.
-
-
-
-### _class_ **TwoPhaseSubscriber**
-
-CDB Subscriber for config data with support for aborting transactions.
-
-Subscriber that is capable of aborting transactions during the
-prepare phase of a transaction.
-
-The following methods will be called on the handler in addition to
-the methods described in Subscriber:
-
-* prepare(kp, op, oldv, newv, state) (mandatory)
-
- Called in the transaction prepare phase. If an exception occurs
- during the invocation of prepare the transaction is aborted.
-
-* cleanup(state) (optional)
-
- Called after a prepare failure if available. Use to cleanup
- resources allocated by prepare.
-
-* abort(kp, op, oldv, newv, state) (mandatory)
-
- Called if another subscriber aborts the transaction and this
- transaction has been prepared.
-
-Methods are called in the following order:
-
-1. should_iterate -> prepare ( -> cleanup, on exception)
-2. should_iterate -> iterate -> post_iterate
-3. should_iterate -> abort, if transaction is aborted by other subscriber
-
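-Example handler (a sketch; resource_available() and
-release_resources() are hypothetical helpers marking where
-allocation and cleanup would go):
-
-    class MyTwoPhaseIter(object):
-        def prepare(self, kp, op, oldv, newv, state):
-            # Raising an exception here aborts the transaction.
-            if not resource_available():
-                raise Exception('required resource is unavailable')
-            return ncs.ITER_RECURSE
-
-        def iterate(self, kp, op, oldv, newv, state):
-            # Commit phase: apply the change.
-            return ncs.ITER_RECURSE
-
-        def abort(self, kp, op, oldv, newv, state):
-            # Another subscriber aborted; undo what prepare set up.
-            release_resources()
-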
-```python
-TwoPhaseSubscriber(name, app=None, log=None, host='127.0.0.1', port=4569, path=None)
-```
-
-Members:
-
-
-
-daemon
-
-A boolean value indicating whether this thread is a daemon thread.
-
-This must be set before start() is called, otherwise RuntimeError is
-raised. Its initial value is inherited from the creating thread; the
-main thread is not a daemon thread and therefore all threads created in
-the main thread default to daemon = False.
-
-The entire Python program exits when only daemon threads are left.
-
-
-
-
-
-getName(...)
-
-Method:
-
-```python
-getName(self)
-```
-
-Return a string used for identification purposes only.
-
-This method is deprecated, use the name attribute instead.
-
-
-
-
-
-ident
-
-_Readonly property_
-
-Thread identifier of this thread or None if it has not been started.
-
-This is a nonzero integer. See the get_ident() function. Thread
-identifiers may be recycled when a thread exits and another thread is
-created. The identifier is available even after the thread has exited.
-
-
-
-
-
-init(...)
-
-Method:
-
-```python
-init(self)
-```
-
-Custom initialization.
-
-Override this method to do custom initialization without needing
-to override __init__.
-
-
-
-
-
-isDaemon(...)
-
-Method:
-
-```python
-isDaemon(self)
-```
-
-Return whether this thread is a daemon.
-
-This method is deprecated, use the daemon attribute instead.
-
-
-
-
-
-is_alive(...)
-
-Method:
-
-```python
-is_alive(self)
-```
-
-Return whether the thread is alive.
-
-This method returns True just before the run() method starts until just
-after the run() method terminates. See also the module function
-enumerate().
-
-
-
-
-
-join(...)
-
-Method:
-
-```python
-join(self, timeout=None)
-```
-
-Wait until the thread terminates.
-
-This blocks the calling thread until the thread whose join() method is
-called terminates -- either normally or through an unhandled exception
-or until the optional timeout occurs.
-
-When the timeout argument is present and not None, it should be a
-floating point number specifying a timeout for the operation in seconds
-(or fractions thereof). As join() always returns None, you must call
-is_alive() after join() to decide whether a timeout happened -- if the
-thread is still alive, the join() call timed out.
-
-When the timeout argument is not present or None, the operation will
-block until the thread terminates.
-
-A thread can be join()ed many times.
-
-join() raises a RuntimeError if an attempt is made to join the current
-thread as that would cause a deadlock. It is also an error to join() a
-thread before it has been started and attempts to do so raises the same
-exception.
-
-
-
-
-
-name
-
-A string used for identification purposes only.
-
-It has no semantics. Multiple threads may be given the same name. The
-initial name is set by the constructor.
-
-
-
-
-
-native_id
-
-_Readonly property_
-
-Native integral thread ID of this thread, or None if it has not been started.
-
-This is a non-negative integer. See the get_native_id() function.
-This represents the Thread ID as reported by the kernel.
-
-
-
-
-
-register(...)
-
-Method:
-
-```python
-register(self, path, iter_obj=None, iter_flags=1, priority=0, flags=0, subtype=None)
-```
-
-Register an iterator object at a specific path.
-
-Setting 'iter_obj' to None will internally use 'self' as the iterator
-object which means that TwoPhaseSubscriber needs to be sub-classed.
-
-Operational and configuration subscriptions can be done on the
-same TwoPhaseSubscriber, but in that case the notifications may be
-arbitrarily interleaved, including operational notifications
-arriving between different configuration notifications for the
-same transaction. If this is a problem, use separate
-TwoPhaseSubscriber instances for operational and configuration
-subscriptions.
-
-For arguments and flags, see Subscriber.register()
-
-
-
-
-
-run(...)
-
-Method:
-
-```python
-run(self)
-```
-
-Main processing loop.
-
-
-
-
-
-setDaemon(...)
-
-Method:
-
-```python
-setDaemon(self, daemonic)
-```
-
-Set whether this thread is a daemon.
-
-This method is deprecated, use the .daemon property instead.
-
-
-
-
-
-setName(...)
-
-Method:
-
-```python
-setName(self, name)
-```
-
-Set the name string for this thread.
-
-This method is deprecated, use the name attribute instead.
-
-
-
-
-
-start(...)
-
-Method:
-
-```python
-start(self)
-```
-
-Start the subscriber.
-
-
-
-
-
-stop(...)
-
-Method:
-
-```python
-stop(self)
-```
-
-Stop the subscriber.
-
-
-
-## Predefined Values
-
-```python
-
-A_CDB = 1
-DATA_SOCKET = 2
-DONE_OPERATIONAL = 4
-DONE_PRIORITY = 1
-DONE_SOCKET = 2
-DONE_TRANSACTION = 3
-FLAG_INIT = 1
-FLAG_UPGRADE = 2
-GET_MODS_CLI_NO_BACKQUOTES = 8
-GET_MODS_INCLUDE_LISTS = 1
-GET_MODS_INCLUDE_MOVES = 16
-GET_MODS_REVERSE = 2
-GET_MODS_SUPPRESS_DEFAULTS = 4
-GET_MODS_WANT_ANCESTOR_DELETE = 32
-LOCK_PARTIAL = 8
-LOCK_REQUEST = 4
-LOCK_SESSION = 2
-LOCK_WAIT = 1
-OPERATIONAL = 3
-O_CDB = 2
-PRE_COMMIT_RUNNING = 4
-READ_COMMITTED = 16
-READ_SOCKET = 0
-RUNNING = 1
-STARTUP = 2
-SUBSCRIPTION_SOCKET = 1
-SUB_ABORT = 3
-SUB_COMMIT = 2
-SUB_FLAG_HA_IS_SECONDARY = 16
-SUB_FLAG_HA_IS_SLAVE = 16
-SUB_FLAG_HA_SYNC = 8
-SUB_FLAG_IS_LAST = 1
-SUB_FLAG_REVERT = 4
-SUB_FLAG_TRIGGER = 2
-SUB_OPER = 4
-SUB_OPERATIONAL = 3
-SUB_PREPARE = 1
-SUB_RUNNING = 1
-SUB_RUNNING_TWOPHASE = 2
-SUB_WANT_ABORT_ON_ABORT = 1
-S_CDB = 3
-```
diff --git a/developer-reference/pyapi/ncs.dp.md b/developer-reference/pyapi/ncs.dp.md
deleted file mode 100644
index 99100623..00000000
--- a/developer-reference/pyapi/ncs.dp.md
+++ /dev/null
@@ -1,1241 +0,0 @@
-# Python ncs.dp Module
-
-Callback module for connecting data providers to ConfD/NCS.
-
-## Functions
-
-### return_worker_socket
-
-```python
-return_worker_socket(state, key)
-```
-
-Return worker socket associated with a worker thread from Daemon/state.
-
-Return worker socket to pool.
-
-### take_worker_socket
-
-```python
-take_worker_socket(state, name, key=None)
-```
-
-Take worker socket associated with a worker thread from Daemon/state.
-
-Takes a worker socket from the pool; it must be returned with
-dp.return_worker_socket after use.
-
-
-## Classes
-
-### _class_ **Action**
-
-Action callback.
-
-This class makes it easy to create and register action callbacks by
-sub-classing it and implementing cb_action in the derived class.
-
-```python
-Action(daemon, actionpoint, log=None, init_args=None)
-```
-
-Initialize this object.
-
-The 'daemon' argument should be a Daemon instance. 'actionpoint'
-is the name of the tailf:actionpoint to manage. 'log' can be any
-log object, and if not set the Daemon logger will be used.
-'init_args' may be any object that will be passed into init()
-when this object is constructed. Lastly, the low-level function
-dp.register_action_cbs() will be called.
-
-When using this class together with ncs.application.Application
-there is no need to manually initialize this object as it is
-done by the Application.register_action() method.
-
-Arguments:
-
-* daemon -- Daemon instance (dp.Daemon)
-* actionpoint -- actionpoint name (str)
-* log -- logging object (optional)
-* init_args -- additional arguments (optional)
-
-Members:
-
-
-
-action(...)
-
-Static method:
-
-```python
-action(fn)
-```
-
-Decorator for the cb_action callback.
-
-Only use this decorator for actions of tailf:action type.
-
-Using this decorator alters the signature of the cb_action callback
-and passes in maagic.Node objects for input and output action data.
-
-Example of a decorated cb_action:
-
- @Action.action
- def cb_action(self, uinfo, name, kp, input, output, trans):
- pass
-
-Callback arguments:
-
-* uinfo -- a UserInfo object
-* name -- the tailf:action name (string)
-* kp -- the keypath of the action (HKeypathRef)
-* input -- input node (maagic.Node)
-* output -- output node (maagic.Node)
-* trans -- read only transaction, same as action transaction if
- executed with an action context (maapi.Transaction)
-
-
-
-
-
-cb_init(...)
-
-Method:
-
-```python
-cb_init(self, uinfo)
-```
-
-The cb_init callback must always be implemented.
-
-This default implementation will associate a new worker socket
-with this callback.
-
-
-
-
-
-init(...)
-
-Method:
-
-```python
-init(self, init_args)
-```
-
-Custom initialization.
-
-When registering an action using ncs.application.Application this
-method will be called with the 'init_args' passed into the
-register_action() function.
-
-
-
-
-
-rpc(...)
-
-Static method:
-
-```python
-rpc(fn)
-```
-
-Decorator for the cb_action callback.
-
-Only use this decorator for rpcs.
-
-Using this decorator alters the signature of the cb_action callback
-and passes in maagic.Node objects for input and output action data.
-
-Example of a decorated cb_action:
-
- @Action.rpc
- def cb_action(self, uinfo, name, input, output):
- pass
-
-Callback arguments:
-
-* uinfo -- a UserInfo object
-* name -- the rpc name (string)
-* input -- input node (maagic.Node)
-* output -- output node (maagic.Node)
-
-
-
-
-
-start(...)
-
-Method:
-
-```python
-start(self)
-```
-
-Custom actionpoint start triggered when Python VM starts up.
-
-
-
-
-
-stop(...)
-
-Method:
-
-```python
-stop(self)
-```
-
-Custom actionpoint stop triggered when Python VM shuts down.
-
-
-
-### _class_ **Daemon**
-
-Manage a data provider connection towards ConfD/NCS.
-
-```python
-Daemon(name, log=None, ip='127.0.0.1', port=4569, path=None, state_mgr=None)
-```
-
-Initialize a Daemon object.
-
-The 'name' argument should be unique. It will show up in the
-CLI and in error messages. All other arguments are optional.
-Argument 'log' can be any log object, and if not set the standard
-logging mechanism will be used. Set 'ip' and 'port' to
-where your ConfD/NCS server is. 'path' is the filename of a unix
-domain socket to be used in place of 'ip' and 'port'. If 'path'
-is provided, 'ip' and 'port' arguments are ignored.
-
-Daemon supports automatic restarting in case of error if a
-state manager is provided using the state_mgr parameter.
-
-Members:
-
-
-
-INIT_RETRY_INTERVAL_S
-
-```python
-INIT_RETRY_INTERVAL_S = 1
-```
-
-
-
-
-
-
-ctx(...)
-
-Method:
-
-```python
-ctx(self)
-```
-
-Return the daemon context.
-
-
-
-
-
-daemon
-
-A boolean value indicating whether this thread is a daemon thread.
-
-This must be set before start() is called, otherwise RuntimeError is
-raised. Its initial value is inherited from the creating thread; the
-main thread is not a daemon thread and therefore all threads created in
-the main thread default to daemon = False.
-
-The entire Python program exits when only daemon threads are left.
-
-
-
-
-
-finish(...)
-
-Method:
-
-```python
-finish(self)
-```
-
-Stop the daemon thread.
-
-
-
-
-
-getName(...)
-
-Method:
-
-```python
-getName(self)
-```
-
-Return a string used for identification purposes only.
-
-This method is deprecated, use the name attribute instead.
-
-
-
-
-
-ident
-
-_Readonly property_
-
-Thread identifier of this thread or None if it has not been started.
-
-This is a nonzero integer. See the get_ident() function. Thread
-identifiers may be recycled when a thread exits and another thread is
-created. The identifier is available even after the thread has exited.
-
-
-
-
-
-ip(...)
-
-Method:
-
-```python
-ip(self)
-```
-
-Return the ip address.
-
-
-
-
-
-isDaemon(...)
-
-Method:
-
-```python
-isDaemon(self)
-```
-
-Return whether this thread is a daemon.
-
-This method is deprecated, use the daemon attribute instead.
-
-
-
-
-
-is_alive(...)
-
-Method:
-
-```python
-is_alive(self)
-```
-
-Return whether the thread is alive.
-
-This method returns True just before the run() method starts until just
-after the run() method terminates. See also the module function
-enumerate().
-
-
-
-
-
-join(...)
-
-Method:
-
-```python
-join(self, timeout=None)
-```
-
-Wait until the thread terminates.
-
-This blocks the calling thread until the thread whose join() method is
-called terminates -- either normally or through an unhandled exception
-or until the optional timeout occurs.
-
-When the timeout argument is present and not None, it should be a
-floating point number specifying a timeout for the operation in seconds
-(or fractions thereof). As join() always returns None, you must call
-is_alive() after join() to decide whether a timeout happened -- if the
-thread is still alive, the join() call timed out.
-
-When the timeout argument is not present or None, the operation will
-block until the thread terminates.
-
-A thread can be join()ed many times.
-
-join() raises a RuntimeError if an attempt is made to join the current
-thread as that would cause a deadlock. It is also an error to join() a
-thread before it has been started and attempts to do so raises the same
-exception.
-
-
-
-
-
-load_schemas(...)
-
-Method:
-
-```python
-load_schemas(self)
-```
-
-Load schema information into the process memory.
-
-
-
-
-
-name
-
-A string used for identification purposes only.
-
-It has no semantics. Multiple threads may be given the same name. The
-initial name is set by the constructor.
-
-
-
-
-
-native_id
-
-_Readonly property_
-
-Native integral thread ID of this thread, or None if it has not been started.
-
-This is a non-negative integer. See the get_native_id() function.
-This represents the Thread ID as reported by the kernel.
-
-
-
-
-
-path(...)
-
-Method:
-
-```python
-path(self)
-```
-
-Return the unix domain socket path.
-
-
-
-
-
-port(...)
-
-Method:
-
-```python
-port(self)
-```
-
-Return the port.
-
-
-
-
-
-register_trans_cb(...)
-
-Method:
-
-```python
-register_trans_cb(self, trans_cb_cls=TransactionCallback)
-```
-
-Register a transaction callback class.
-
-It's not necessary to call this method. Only do that if a custom
-transaction callback will be used.
-
-
-
-
-
-register_trans_validate_cb(...)
-
-Method:
-
-```python
-register_trans_validate_cb(self, trans_validate_cb_cls=TransValidateCallback)
-```
-
-Register a transaction validation callback class.
-
-It's not necessary to call this method. Only do that if a custom
-transaction callback will be used.
-
-
-
-
-
-run(...)
-
-Method:
-
-```python
-run(self)
-```
-
-Daemon thread processing loop.
-
-Don't call this method explicitly. It handles reading of control
-and worker sockets and notifying ConfD/NCS that it should continue
-processing by calling the low-level function dp.fd_ready().
-If the connection towards ConfD/NCS is broken or if finish() is
-explicitly called, this function (and the thread) will end.
-
-
-
-
-
-setDaemon(...)
-
-Method:
-
-```python
-setDaemon(self, daemonic)
-```
-
-Set whether this thread is a daemon.
-
-This method is deprecated, use the .daemon property instead.
-
-
-
-
-
-setName(...)
-
-Method:
-
-```python
-setName(self, name)
-```
-
-Set the name string for this thread.
-
-This method is deprecated, use the name attribute instead.
-
-
-
-
-
-start(...)
-
-Method:
-
-```python
-start(self)
-```
-
-Start daemon work thread.
-
-After registering any callbacks (action, services and such), call
-this function to start processing. The low-level function
-dp.register_done() will be called before the thread is started.
-
-
-
-
-
-wsock
-
-_Readonly property_
-
-
-
-
-### _class_ **StateManager**
-
-Base class for state managers used with Daemon
-
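-A minimal sketch of a state manager that only logs; real code would
-register daemon callbacks in setup() (mylog is assumed to be an
-existing log object):
-
-    from ncs import dp
-
-    class MyStateManager(dp.StateManager):
-        def __init__(self, log):
-            super().__init__(log)
-            self.mylog = log
-
-        def setup(self, state, previous_state):
-            # Register actions, services etc. for the daemon here.
-            self.mylog.info('daemon (re)started')
-
-        def teardown(self, state, finished):
-            self.mylog.info('daemon stopped')
-
-    daemon = dp.Daemon('my-daemon', state_mgr=MyStateManager(mylog))
-    daemon.start()
-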
-```python
-StateManager(log)
-```
-
-Members:
-
-
-
-setup(...)
-
-Method:
-
-```python
-setup(self, state, previous_state)
-```
-
-Not Implemented.
-
-
-
-
-
-teardown(...)
-
-Method:
-
-```python
-teardown(self, state, finished)
-```
-
-Not Implemented.
-
-
-
-### _class_ **TransValidateCallback**
-
-Default transaction validation callback implementation class.
-
-When registering validation points in ConfD/NCS a transaction
-validation callback handler must be provided. This class is a
-generic implementation of such a handler. It implements the
-required callbacks 'cb_init' and 'cb_stop'.
-
-```python
-TransValidateCallback(state)
-```
-
-Initialize a TransValidateCallback object.
-
-The argument 'state' is the dict representation of a daemon.
-
-Members:
-
-
-
-cb_init(...)
-
-Method:
-
-```python
-cb_init(self, tctx)
-```
-
-The cb_init callback must always be implemented.
-
-It is required to prepare for future validation
-callbacks. This default implementation allocates a worker
-thread and socket pair and associates it with the transaction.
-
-
-
-
-
-cb_stop(...)
-
-Method:
-
-```python
-cb_stop(self, tctx)
-```
-
-The cb_stop callback must always be implemented.
-
-Clean up resources previously allocated in the cb_init
-callback. This default implementation returns the worker
-thread and socket pair to the pool of workers.
-
-
-
-### _class_ **TransactionCallback**
-
-Default transaction callback implementation class.
-
-When connecting data providers to ConfD/NCS a transaction callback
-handler must be provided. This class is a generic implementation of
-such a handler. It implements the only required callback 'cb_init'.
-
-```python
-TransactionCallback(state)
-```
-
-Initialize a TransactionCallback object.
-
-The argument 'state' is the dict representation of a daemon.
-
-Members:
-
-
-
-cb_finish(...)
-
-Method:
-
-```python
-cb_finish(self, tctx)
-```
-
-The cb_finish callback of TransactionCallback.
-
-This implementation returns worker socket associated with a
-worker thread from Daemon/state.
-
-
-
-
-
-cb_init(...)
-
-Method:
-
-```python
-cb_init(self, tctx)
-```
-
-The cb_init callback must always be implemented.
-
-It is required to prepare for future read/write operations towards
-the data source. This default implementation associates a worker
-socket with a transaction.
-
-
-
-### _class_ **ValidationError**
-
-Exception raised to indicate a failed validation
-
-
-```python
-ValidationError(message)
-```
-
-Members:
-
-
-
-add_note(...)
-
-Method:
-
-Exception.add_note(note) --
-add a note to the exception
-
-
-
-
-
-args
-
-
-
-
-
-
-with_traceback(...)
-
-Method:
-
-Exception.with_traceback(tb) --
-set self.__traceback__ to tb and return self.
-
-
-
-### _class_ **ValidationPoint**
-
-Validation Point callback.
-
-This class makes it easy to create and register validation point
-callbacks by subclassing it and implementing cb_validate with the
-@validate or @validate_with_trans decorator.
-
-```python
-ValidationPoint(daemon, validationpoint, log=None, init_args=None)
-```
-
-Members:
-
-
-
-init(...)
-
-Method:
-
-```python
-init(self, init_args)
-```
-
-Custom initialization.
-
-When registering a validation point using
-ncs.application.Application this method will be called with
-the 'init_args' passed into the register_validation()
-function.
-
-
-
-
-
-start(...)
-
-Method:
-
-```python
-start(self)
-```
-
-Start ValidationPoint
-
-
-
-
-
-stop(...)
-
-Method:
-
-```python
-stop(self)
-```
-
-Stop ValidationPoint
-
-
-
-
-
-validate(...)
-
-Static method:
-
-```python
-validate(fn)
-```
-
-Decorator for the cb_validate callback.
-
-Using this decorator alters the signature of the cb_validate
-callback and passes in the validationpoint as the last
-argument.
-
-In addition it logs unhandled exceptions and handles the
-ValidationError exception by setting the transaction error and
-returning _tm.CONFD_ERR.
-
-Example of a decorated cb_validate:
-
- @ValidationPoint.validate
- def cb_validate(self, tctx, keypath, value, validationpoint):
- pass
-
-Callback arguments:
-
-* tctx -- transaction context (TransCtxRef)
-* keypath -- path to the node being validated (HKeypathRef)
-* value -- new value of keypath (Value)
-* validationpoint -- name of the validation point (str)
-
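-
-Example of a validation callback that rejects values above a limit by
-raising ValidationError (the limit is just an illustration):
-
-    from ncs.dp import ValidationPoint, ValidationError
-
-    class MyValidation(ValidationPoint):
-        @ValidationPoint.validate
-        def cb_validate(self, tctx, keypath, value, validationpoint):
-            # The decorator turns ValidationError into a failed
-            # validation with this message as the error text.
-            if int(value) > 100:
-                raise ValidationError('value must be at most 100')
-            return ncs.CONFD_OK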
-
-
-
-
-validate_with_trans(...)
-
-Static method:
-
-```python
-validate_with_trans(fn)
-```
-
-Decorator for the cb_validate callback.
-
-Using this decorator alters the signature of the cb_validate
-callback and passes in root node attached to the transaction
-being validated and the validationpoint as the last argument.
-
-In addition it logs unhandled exceptions and handles the
-ValidationError exception by setting the transaction error and
-returning _tm.CONFD_ERR.
-
-Example of a decorated cb_validate:
-
- @ValidationPoint.validate_with_trans
- def cb_validate(self, tctx, root, kp, value, validationpoint):
- pass
-
-Callback arguments:
-
-* tctx -- transaction context (TransCtxRef)
-* root -- root node (maagic.Root)
-* kp -- path to the node being validated (HKeypathRef)
-* value -- new value of keypath (Value)
-* validationpoint -- name of the validation point (str)
-
-
-
-## Predefined Values
-
-```python
-
-ACCESS_CHK_DESCENDANT = 1024
-ACCESS_CHK_FINAL = 512
-ACCESS_CHK_INTERMEDIATE = 256
-ACCESS_OP_CREATE = 4
-ACCESS_OP_DELETE = 16
-ACCESS_OP_EXECUTE = 2
-ACCESS_OP_READ = 1
-ACCESS_OP_UPDATE = 8
-ACCESS_OP_WRITE = 32
-ACCESS_RESULT_ACCEPT = 0
-ACCESS_RESULT_CONTINUE = 2
-ACCESS_RESULT_DEFAULT = 3
-ACCESS_RESULT_REJECT = 1
-BAD_VALUE_BAD_KEY_TAG = 32
-BAD_VALUE_BAD_LEXICAL = 19
-BAD_VALUE_BAD_TAG = 21
-BAD_VALUE_BAD_VALUE = 20
-BAD_VALUE_CUSTOM_FACET_ERROR_MESSAGE = 16
-BAD_VALUE_ENUMERATION = 11
-BAD_VALUE_FRACTION_DIGITS = 3
-BAD_VALUE_INVALID_FACET = 18
-BAD_VALUE_INVALID_REGEX = 9
-BAD_VALUE_INVALID_TYPE_NAME = 23
-BAD_VALUE_INVALID_UTF8 = 38
-BAD_VALUE_INVALID_XPATH = 34
-BAD_VALUE_INVALID_XPATH_AT_TAG = 40
-BAD_VALUE_INVALID_XPATH_PATH = 39
-BAD_VALUE_LENGTH = 15
-BAD_VALUE_MAX_EXCLUSIVE = 5
-BAD_VALUE_MAX_INCLUSIVE = 6
-BAD_VALUE_MAX_LENGTH = 14
-BAD_VALUE_MIN_EXCLUSIVE = 7
-BAD_VALUE_MIN_INCLUSIVE = 8
-BAD_VALUE_MIN_LENGTH = 13
-BAD_VALUE_MISSING_KEY = 37
-BAD_VALUE_MISSING_NAMESPACE = 27
-BAD_VALUE_NOT_RESTRICTED_XPATH = 35
-BAD_VALUE_NO_DEFAULT_NAMESPACE = 24
-BAD_VALUE_PATTERN = 12
-BAD_VALUE_POP_TOO_FAR = 31
-BAD_VALUE_RANGE = 29
-BAD_VALUE_STRING_FUN = 1
-BAD_VALUE_SYMLINK_BAD_KEY_REFERENCE = 33
-BAD_VALUE_TOTAL_DIGITS = 4
-BAD_VALUE_UNIQUELIST = 10
-BAD_VALUE_UNKNOWN_BIT_LABEL = 22
-BAD_VALUE_UNKNOWN_NAMESPACE = 26
-BAD_VALUE_UNKNOWN_NAMESPACE_PREFIX = 25
-BAD_VALUE_USER_ERROR = 17
-BAD_VALUE_VALUE2VALUE_FUN = 28
-BAD_VALUE_WRONG_DECIMAL64_FRACTION_DIGITS = 2
-BAD_VALUE_WRONG_NUMBER_IDENTIFIERS = 30
-BAD_VALUE_XPATH_ERROR = 36
-CLI_ACTION_NOT_FOUND = 13
-CLI_AMBIGUOUS_COMMAND = 63
-CLI_BAD_ACTION_RESPONSE = 16
-CLI_BAD_LEAF_VALUE = 6
-CLI_CDM_NOT_SUPPORTED = 74
-CLI_COMMAND_ABORTED = 2
-CLI_COMMAND_ERROR = 1
-CLI_COMMAND_FAILED = 3
-CLI_CONFIRMED_NOT_SUPPORTED = 39
-CLI_COPY_CONFIG_FAILED = 32
-CLI_COPY_FAILED = 31
-CLI_COPY_PATH_IDENTICAL = 33
-CLI_CREATE_PATH = 23
-CLI_CUSTOM_ERROR = 4
-CLI_DELETE_ALL_FAILED = 10
-CLI_DELETE_ERROR = 12
-CLI_DELETE_FAILED = 11
-CLI_ELEMENT_DOES_NOT_EXIST = 66
-CLI_ELEMENT_MANDATORY = 75
-CLI_ELEMENT_NOT_FOUND = 14
-CLI_ELEM_NOT_WRITABLE = 7
-CLI_EXPECTED_BOL = 56
-CLI_EXPECTED_EOL = 57
-CLI_FAILED_COPY_RUNNING = 38
-CLI_FAILED_CREATE_CONTEXT = 37
-CLI_FAILED_OPEN_STARTUP = 41
-CLI_FAILED_OPEN_STARTUP_CONFIG = 42
-CLI_FAILED_TERM_REDIRECT = 49
-CLI_ILLEGAL_DIRECTORY_NAME = 52
-CLI_ILLEGAL_FILENAME = 53
-CLI_INCOMPLETE_CMD_PATH = 67
-CLI_INCOMPLETE_COMMAND = 9
-CLI_INCOMPLETE_PATH = 8
-CLI_INCOMPLETE_PATTERN = 64
-CLI_INVALID_PARAMETER = 54
-CLI_INVALID_PASSWORD = 21
-CLI_INVALID_PATH = 58
-CLI_INVALID_ROLLBACK_NR = 15
-CLI_INVALID_SELECT = 59
-CLI_MESSAGE_TOO_LARGE = 48
-CLI_MISSING_ACTION_PARAM = 17
-CLI_MISSING_ACTION_PARAM_VALUE = 18
-CLI_MISSING_ARGUMENT = 69
-CLI_MISSING_DISPLAY_GROUP = 55
-CLI_MISSING_ELEMENT = 65
-CLI_MISSING_VALUE = 68
-CLI_MOVE_FAILED = 30
-CLI_MUST_BE_AN_INTEGER = 70
-CLI_MUST_BE_INTEGER = 43
-CLI_MUST_BE_TRUE_OR_FALSE = 71
-CLI_NOT_ALLOWED = 5
-CLI_NOT_A_DIRECTORY = 50
-CLI_NOT_A_FILE = 51
-CLI_NOT_FOUND = 28
-CLI_NOT_SUPPORTED = 34
-CLI_NOT_WRITABLE = 27
-CLI_NO_SUCH_ELEMENT = 45
-CLI_NO_SUCH_SESSION = 44
-CLI_NO_SUCH_USER = 47
-CLI_ON_LINE = 25
-CLI_ON_LINE_DESC = 26
-CLI_OPEN_FILE = 20
-CLI_READ_ERROR = 19
-CLI_REALLOCATE = 24
-CLI_SENSITIVE_DATA = 73
-CLI_SET_FAILED = 29
-CLI_START_REPLAY_FAILED = 72
-CLI_TARGET_EXISTS = 35
-CLI_UNKNOWN_ARGUMENT = 61
-CLI_UNKNOWN_COMMAND = 62
-CLI_UNKNOWN_ELEMENT = 60
-CLI_UNKNOWN_HIDEGROUP = 22
-CLI_UNKNOWN_MODE = 36
-CLI_WILDCARD_NOT_ALLOWED = 46
-CLI_WRITE_CONFIG_FAILED = 40
-COMPLETION = 0
-COMPLETION_DEFAULT = 3
-COMPLETION_DESC = 2
-COMPLETION_INFO = 1
-CONTROL_SOCKET = 0
-C_CREATE = 2
-C_MOVE_AFTER = 6
-C_REMOVE = 3
-C_SET_ATTR = 5
-C_SET_CASE = 4
-C_SET_ELEM = 1
-DAEMON_FLAG_BULK_GET_CONTAINER = 128
-DAEMON_FLAG_NO_DEFAULTS = 4
-DAEMON_FLAG_PREFER_BULK_GET = 64
-DAEMON_FLAG_REG_DONE = 65536
-DAEMON_FLAG_REG_REPLACE_DISCONNECT = 16
-DAEMON_FLAG_SEND_IKP = 1
-DAEMON_FLAG_STRINGSONLY = 2
-DATA_AFTER = 1
-DATA_BEFORE = 0
-DATA_CREATE = 0
-DATA_DELETE = 1
-DATA_FIRST = 2
-DATA_INSERT = 2
-DATA_LAST = 3
-DATA_MERGE = 3
-DATA_MOVE = 4
-DATA_REMOVE = 6
-DATA_REPLACE = 5
-DATA_WANT_FILTER = 1
-ERRTYPE_BAD_VALUE = 2
-ERRTYPE_CLI = 4
-ERRTYPE_MISC = 8
-ERRTYPE_NCS = 16
-ERRTYPE_OPERATION = 32
-ERRTYPE_VALIDATION = 1
-MISC_ACCESS_DENIED = 5
-MISC_APPLICATION = 19
-MISC_APPLICATION_INTERNAL = 20
-MISC_BAD_PERSIST_ID = 16
-MISC_CANDIDATE_ABORT_BAD_USID = 17
-MISC_CDB_OPER_UNAVAILABLE = 37
-MISC_DATA_MISSING = 44
-MISC_EXTERNAL = 22
-MISC_EXTERNAL_TIMEOUT = 45
-MISC_FILE_ACCESS_PATH = 33
-MISC_FILE_BAD_PATH = 34
-MISC_FILE_BAD_VALUE = 35
-MISC_FILE_CORRUPT = 52
-MISC_FILE_CREATE_PATH = 29
-MISC_FILE_DELETE_PATH = 32
-MISC_FILE_EOF = 36
-MISC_FILE_MOVE_PATH = 30
-MISC_FILE_OPEN_ERROR = 27
-MISC_FILE_SET_PATH = 31
-MISC_FILE_SYNTAX_ERROR = 28
-MISC_FILE_SYNTAX_ERROR_1 = 26
-MISC_HA_ABORT = 55
-MISC_INCONSISTENT_VALUE = 7
-MISC_INDEXED_VIEW_LIST_HOLE = 46
-MISC_INDEXED_VIEW_LIST_TOO_BIG = 18
-MISC_INTERNAL = 21
-MISC_INTERRUPT = 10
-MISC_IN_USE = 3
-MISC_LOCKED_BY = 4
-MISC_MISSING_INSTANCE = 8
-MISC_NODE_IS_READONLY = 13
-MISC_NODE_WAS_READONLY = 14
-MISC_NOT_IMPLEMENTED = 43
-MISC_NO_SUCH_FILE = 2
-MISC_OPERATION_NOT_SUPPORTED = 38
-MISC_PROTO_USAGE = 23
-MISC_REACHED_MAX_RETRIES = 56
-MISC_RESOLVE_NEEDED = 53
-MISC_RESOURCE_DENIED = 6
-MISC_ROLLBACK_DISABLED = 1
-MISC_ROTATE_LIST_KEY = 58
-MISC_SNMP_BAD_INDEX = 42
-MISC_SNMP_BAD_VALUE = 41
-MISC_SNMP_ERROR = 39
-MISC_SNMP_TIMEOUT = 40
-MISC_SUBAGENT_DOWN = 24
-MISC_SUBAGENT_ERROR = 25
-MISC_TOO_MANY_SESSIONS = 11
-MISC_TOO_MANY_TRANSACTIONS = 12
-MISC_TRANSACTION_CONFLICT = 54
-MISC_UNSUPPORTED_XML_ENCODING = 57
-MISC_UPGRADE_IN_PROGRESS = 15
-MISC_WHEN_FAILED = 9
-MISC_XPATH_COMPILE = 51
-NCS_BAD_AUTHGROUP_CALLBACK_RESPONSE = 104
-NCS_BAD_CAPAS = 14
-NCS_CALL_HOME = 107
-NCS_CLI_LOAD = 19
-NCS_COMMIT_QUEUED = 20
-NCS_COMMIT_QUEUED_AND_DELETED = 113
-NCS_COMMIT_QUEUE_DISABLED = 111
-NCS_COMMIT_QUEUE_HAS_OVERLAPPING = 103
-NCS_COMMIT_QUEUE_HAS_SENTINEL = 75
-NCS_CONFIG_LOCKED = 84
-NCS_CONFLICTING_INTENT = 125
-NCS_CONNECTION_CLOSED = 10
-NCS_CONNECTION_REFUSED = 5
-NCS_CONNECTION_TIMEOUT = 8
-NCS_CQ_BLOCK_OTHERS = 21
-NCS_CQ_REMOTE_NOT_ENABLED = 22
-NCS_DEV_AUTH_FAILED = 1
-NCS_DEV_IN_USE = 81
-NCS_HOST_LOOKUP = 12
-NCS_LOCKED = 3
-NCS_NCS_ACTION_NO_TRANSACTION = 67
-NCS_NCS_ALREADY_EXISTS = 82
-NCS_NCS_CLUSTER_AUTH_FAILED = 74
-NCS_NCS_DEV_ERROR = 69
-NCS_NCS_ERROR = 68
-NCS_NCS_ERROR_IKP = 70
-NCS_NCS_LOAD_TEMPLATE_COPY_TREE_CROSS_NS = 96
-NCS_NCS_LOAD_TEMPLATE_DUPLICATE_MACRO = 119
-NCS_NCS_LOAD_TEMPLATE_EOF_XML = 33
-NCS_NCS_LOAD_TEMPLATE_EXTRA_MACRO_VARS = 118
-NCS_NCS_LOAD_TEMPLATE_INVALID_CBTYPE = 128
-NCS_NCS_LOAD_TEMPLATE_INVALID_PI_REGEX = 122
-NCS_NCS_LOAD_TEMPLATE_INVALID_PI_SYNTAX = 86
-NCS_NCS_LOAD_TEMPLATE_INVALID_VALUE_XML = 30
-NCS_NCS_LOAD_TEMPLATE_MISPLACED_IF_NED_ID_MATCH_XML = 121
-NCS_NCS_LOAD_TEMPLATE_MISPLACED_IF_NED_ID_XML = 110
-NCS_NCS_LOAD_TEMPLATE_MISSING_ELEMENT2_XML = 98
-NCS_NCS_LOAD_TEMPLATE_MISSING_ELEMENT_XML = 29
-NCS_NCS_LOAD_TEMPLATE_MISSING_MACRO_VARS = 117
-NCS_NCS_LOAD_TEMPLATE_MULTIPLE_ELEMENTS_XML = 38
-NCS_NCS_LOAD_TEMPLATE_MULTIPLE_KEY_LEAFS_XML = 77
-NCS_NCS_LOAD_TEMPLATE_MULTIPLE_SP_XML = 35
-NCS_NCS_LOAD_TEMPLATE_SHADOWED_NED_ID_XML = 109
-NCS_NCS_LOAD_TEMPLATE_TAG_AMBIGUOUS_XML = 102
-NCS_NCS_LOAD_TEMPLATE_TRAILING_XML = 32
-NCS_NCS_LOAD_TEMPLATE_UNCLOSED_PI = 88
-NCS_NCS_LOAD_TEMPLATE_UNEXPECTED_PI = 89
-NCS_NCS_LOAD_TEMPLATE_UNKNOWN_ATTRIBUTE_XML = 31
-NCS_NCS_LOAD_TEMPLATE_UNKNOWN_ELEMENT2_XML = 97
-NCS_NCS_LOAD_TEMPLATE_UNKNOWN_ELEMENT_XML = 36
-NCS_NCS_LOAD_TEMPLATE_UNKNOWN_MACRO = 116
-NCS_NCS_LOAD_TEMPLATE_UNKNOWN_NED_ID_XML = 99
-NCS_NCS_LOAD_TEMPLATE_UNKNOWN_NS_XML = 37
-NCS_NCS_LOAD_TEMPLATE_UNKNOWN_PI = 85
-NCS_NCS_LOAD_TEMPLATE_UNKNOWN_SP_XML = 34
-NCS_NCS_LOAD_TEMPLATE_UNMATCHED_PI = 87
-NCS_NCS_LOAD_TEMPLATE_UNSUPPORTED_NED_ID_AT_TAG_XML = 101
-NCS_NCS_LOAD_TEMPLATE_UNSUPPORTED_NED_ID_XML = 100
-NCS_NCS_LOAD_TEMPLATE_UNSUPPORTED_NETCONF_YANG_ATTRIBUTES = 126
-NCS_NCS_MISSING_CLUSTER_AUTH = 73
-NCS_NCS_MISSING_VARIABLES = 52
-NCS_NCS_NED_MULTI_ERROR = 76
-NCS_NCS_NO_CAPABILITIES = 64
-NCS_NCS_NO_DIFF = 71
-NCS_NCS_NO_FORWARD_DIFF = 72
-NCS_NCS_NO_NAMESPACE = 65
-NCS_NCS_NO_SP_TEMPLATE = 48
-NCS_NCS_NO_TEMPLATE = 47
-NCS_NCS_NO_TEMPLATE_XML = 23
-NCS_NCS_NO_WRITE_TRANSACTION = 66
-NCS_NCS_OPERATION_LOCKED = 83
-NCS_NCS_PACKAGE_SYNC_MISMATCHED_LOAD_PATH = 123
-NCS_NCS_SERVICE_CONFLICT = 78
-NCS_NCS_TEMPLATE_CONTEXT_NODE_NOEXISTS = 90
-NCS_NCS_TEMPLATE_COPY_TREE_BAD_OP = 94
-NCS_NCS_TEMPLATE_FOREACH = 51
-NCS_NCS_TEMPLATE_FOREACH_XML = 28
-NCS_NCS_TEMPLATE_GUARD_LENGTH = 59
-NCS_NCS_TEMPLATE_GUARD_LENGTH_XML = 44
-NCS_NCS_TEMPLATE_INSERT = 55
-NCS_NCS_TEMPLATE_INSERT_XML = 40
-NCS_NCS_TEMPLATE_LONE_GUARD = 57
-NCS_NCS_TEMPLATE_LONE_GUARD_XML = 42
-NCS_NCS_TEMPLATE_LOOP_PREVENTION = 95
-NCS_NCS_TEMPLATE_MISSING_VALUE = 56
-NCS_NCS_TEMPLATE_MISSING_VALUE_XML = 41
-NCS_NCS_TEMPLATE_MOVE = 60
-NCS_NCS_TEMPLATE_MOVE_XML = 45
-NCS_NCS_TEMPLATE_MULTIPLE_CONTEXT_NODES = 92
-NCS_NCS_TEMPLATE_NOT_CREATED = 80
-NCS_NCS_TEMPLATE_NOT_CREATED_XML = 79
-NCS_NCS_TEMPLATE_ORDERED_LIST = 54
-NCS_NCS_TEMPLATE_ORDERED_LIST_XML = 39
-NCS_NCS_TEMPLATE_ROOT_LEAF_LIST = 93
-NCS_NCS_TEMPLATE_SAVED_CONTEXT_NOEXISTS = 91
-NCS_NCS_TEMPLATE_STR2VAL = 61
-NCS_NCS_TEMPLATE_STR2VAL_XML = 46
-NCS_NCS_TEMPLATE_UNSUPPORTED_NED_ID = 112
-NCS_NCS_TEMPLATE_VALUE_LENGTH = 58
-NCS_NCS_TEMPLATE_VALUE_LENGTH_XML = 43
-NCS_NCS_TEMPLATE_WHEN = 50
-NCS_NCS_TEMPLATE_WHEN_KEY_XML = 27
-NCS_NCS_TEMPLATE_WHEN_XML = 26
-NCS_NCS_XPATH = 53
-NCS_NCS_XPATH_COMPILE = 49
-NCS_NCS_XPATH_COMPILE_XML = 24
-NCS_NCS_XPATH_VARBIND = 63
-NCS_NCS_XPATH_XML = 25
-NCS_NED_EXTERNAL_ERROR = 6
-NCS_NED_INTERNAL_ERROR = 7
-NCS_NED_OFFLINE_UNAVAILABLE = 108
-NCS_NED_OUT_OF_SYNC = 18
-NCS_NONED = 15
-NCS_NO_EXISTS = 2
-NCS_NO_TEMPLATE = 62
-NCS_NO_YANG_MODULES = 16
-NCS_NS_SUPPORT = 13
-NCS_OVERLAPPING_PRESENCE_AND_ABSENCE_ASSERTION_COMPLIANCE_TEMPLATE = 127
-NCS_OVERLAPPING_STRICT_ASSERTION_COMPLIANCE_TEMPLATE = 129
-NCS_PLAN_LOCATION = 120
-NCS_REVDROP = 17
-NCS_RPC_ERROR = 9
-NCS_SERVICE_CREATE = 0
-NCS_SERVICE_DELETE = 2
-NCS_SERVICE_UPDATE = 1
-NCS_SESSION_LIMIT_EXCEEDED = 115
-NCS_SOUTHBOUND_LOCKED = 4
-NCS_UNKNOWN_NED_ID = 105
-NCS_UNKNOWN_NED_IDS_COMPLIANCE_TEMPLATE = 124
-NCS_UNKNOWN_NED_ID_DEVICE_TEMPLATE = 106
-NCS_XML_PARSE = 11
-NCS_YANGLIB_NO_SCHEMA_FOR_RUNNING = 114
-OPERATION_CASE_EXISTS = 13
-PATCH_FLAG_AAA_CHECKED = 8
-PATCH_FLAG_BUFFER_DAMPENED = 2
-PATCH_FLAG_FILTER = 4
-PATCH_FLAG_INCOMPLETE = 1
-WORKER_SOCKET = 1
-```
diff --git a/developer-reference/pyapi/ncs.experimental.md b/developer-reference/pyapi/ncs.experimental.md
deleted file mode 100644
index 062268a8..00000000
--- a/developer-reference/pyapi/ncs.experimental.md
+++ /dev/null
@@ -1,242 +0,0 @@
-# Python ncs.experimental Module
-
-Experimental stuff.
-
-This module contains experimental and totally unsupported things that
-may change or disappear at any time in the future. If used, it must be
-explicitly imported.
-
-## Classes
-
-### _class_ **DataCallbacks**
-
-High-level API for implementing data callbacks.
-
-Higher-level abstraction for the DP API. Currently supports read
-operations only; as such it is suitable for 'config false;' data.
-
-Registered callbacks are searched for in registration order. Most
-specific points must be registered first.
-
-The args parameter to handler callbacks is a dictionary with keys
-matching list names in the keypath. If multiple lists with the
-same name exist, the keys are named list-0, list-1 etc., where 0 is
-the top-most list with that name. Values in the dictionary are
-Python types (.as_pyval()); if the list has multiple keys the value
-is set as a list, else the single key value is set.
-
-Example args for keypath
-/root/single-key-list{name}/conflict{first}/conflict{second}/multi{1 one}
-
- {'single-key-list': 'name',
- 'conflict-0': 'first',
- 'conflict-1': 'second',
- 'multi': [1, 'one']}
-
-Example handler and registration:
-
- class Handler(object):
- def get_object(self, tctx, kp, args):
- return {'leaf1': 'value', 'leaf2': 'value'}
-
- def get_next(self, tctx, kp, args, next):
- return None
-
- def count(self):
- return 0
-
- dcb = DataCallbacks(log)
- dcb.register('/namespace:container', Handler())
- _confd.dp.register_data_cb(dd.ctx(), example_ns.callpoint_handler, dcb)
-
-```python
-DataCallbacks(log)
-```
-
-Members:
-
-
-
-cb_exists_optional(...)
-
-Method:
-
-```python
-cb_exists_optional(self, tctx, kp)
-```
-
-low-level cb_exists_optional implementation
-
-
-
-
-
-cb_get_case(...)
-
-Method:
-
-```python
-cb_get_case(self, tctx, kp, choice)
-```
-
-low-level cb_get_case implementation
-
-
-
-
-
-cb_get_elem(...)
-
-Method:
-
-```python
-cb_get_elem(self, tctx, kp)
-```
-
-low-level cb_get_elem implementation
-
-
-
-
-
-cb_get_next(...)
-
-Method:
-
-```python
-cb_get_next(self, tctx, kp, next)
-```
-
-low-level cb_get_next implementation
-
-
-
-
-
-cb_get_next_object(...)
-
-Method:
-
-```python
-cb_get_next_object(self, tctx, kp, next)
-```
-
-low-level cb_get_next_object implementation
-
-
-
-
-
-cb_get_object(...)
-
-Method:
-
-```python
-cb_get_object(self, tctx, kp)
-```
-
-low-level cb_get_object implementation
-
-
-
-
-
-cb_num_instances(...)
-
-Method:
-
-```python
-cb_num_instances(self, tctx, kp)
-```
-
-low-level cb_num_instances implementation
-
-
-
-
-
-register(...)
-
-Method:
-
-```python
-register(self, path, handler)
-```
-
-Register data handler for path.
-
-If handler is a type, it will be instantiated with the DataCallbacks
-log as the only parameter.
-
-The following methods will be called on the handler (see the sketch
-after the list below):
-
-* get_object(kp, args)
-
- Return single object as dictionary.
-
-* get_next(kp, args, next)
-
-  Return the next object as a dictionary; a list of dictionaries can
-  be returned to use result caching, reducing the number of calls
-  required.
-
-* count(kp, args)
-
- Return number of elements in list.
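-
-A minimal sketch of a handler for a list, following the method list
-above (the list name 'item' and its contents are illustrative only):
-
-    class ItemHandler(object):
-        def __init__(self, log):
-            self.log = log
-            self.items = {'a': {'name': 'a', 'value': 1},
-                          'b': {'name': 'b', 'value': 2}}
-
-        def get_object(self, kp, args):
-            # Return a single list entry as a dictionary
-            return self.items.get(args['item'])
-
-        def get_next(self, kp, args, next):
-            # Return entries in key order; None ends the iteration
-            keys = sorted(self.items)
-            return self.items[keys[next]] if next < len(keys) else None
-
-        def count(self, kp, args):
-            # Return the number of entries in the list
-            return len(self.items)
-
-    dcb.register('/namespace:container/item', ItemHandler)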
-
-
-
-### _class_ **Query**
-
-Class encapsulating a MAAPI query operation.
-
-Supports the pattern of executing a query and iterating over the result
-sets as they are requested. The class handles the calls to query_start,
-query_result and query_stop, which means that one can focus on describing
-the query and handling the result.
-
-Example query:
-
- with Query(trans, 'device', '/devices', ['name', 'address', 'port'],
- result_as=ncs.QUERY_TAG_VALUE) as q:
- for r in q:
- print(r)
-
-```python
-Query(trans, expr, context_node, select, chunk_size=1000, initial_offset=1, result_as=3, sort=[])
-```
-
-Initialize a Query.
-
-Members:
-
-
-
-next(...)
-
-Method:
-
-```python
-next(self)
-```
-
-Get the next query result row.
-
-
-
-
-
-stop(...)
-
-Method:
-
-```python
-stop(self)
-```
-
-Stop the running query.
-
-Any resources associated with the query will be released.
-
-
-
diff --git a/developer-reference/pyapi/ncs.log.md b/developer-reference/pyapi/ncs.log.md
deleted file mode 100644
index 20b3961a..00000000
--- a/developer-reference/pyapi/ncs.log.md
+++ /dev/null
@@ -1,517 +0,0 @@
-# Python ncs.log Module
-
-This module provides some logging utilities.
-
-## Functions
-
-### init_logging
-
-```python
-init_logging(vmid, log_file, log_level)
-```
-
-Initialize logging
-
-### log_datefmt
-
-```python
-log_datefmt()
-```
-
-Return date format used in logging.
-
-### log_file
-
-```python
-log_file()
-```
-
-Return the log file used, if any, else None.
-
-### log_format
-
-```python
-log_format()
-```
-
-Return log format.
-
-### log_handler
-
-```python
-log_handler()
-```
-
-Return the log handler used, if any, else None.
-
-### mk_log_formatter
-
-```python
-mk_log_formatter()
-```
-
-Create a log formatter with the log and date formats set up.
-
-### reopen_logs
-
-```python
-reopen_logs()
-```
-
-Re-open log files if a log handler is set.
-
-### set_log_level
-
-```python
-set_log_level(vmid, log_level)
-```
-
-Set log level on the vmid logger and root logger
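-
-A minimal usage sketch tying these functions together (the vmid and
-log file name are illustrative only):
-
-    import logging
-    from ncs.log import Log, init_logging, set_log_level
-
-    # Set up file logging for this Python VM, then raise the level
-    init_logging('my-vm', './logs/my-vm.log', logging.INFO)
-    set_log_level('my-vm', logging.DEBUG)
-
-    mylog = Log(logging.getLogger('my-vm'))
-    mylog.debug('logging for ', 'my-vm', ' is up')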
-
-
-## Classes
-
-### _class_ **Log**
-
-A log helper class.
-
-This class makes it easier to write log entries. It encapsulates
-another log object that supports the Python standard log interface, and
-makes it easier to format the log message by adding the ability to
-support multiple arguments.
-
-Example use:
-
- import logging
- import confd.log
-
- logger = logging.getLogger(__name__)
- mylog = confd.log.Log(logger)
-
- count = 3
- name = 'foo'
- mylog.debug('got ', count, ' values from ', name)
-
-```python
-Log(logobject, add_timestamp=False)
-```
-
-Initialize a Log object.
-
-The argument 'logobject' is mandatory and can be any object that
-supports at least one of the standard log methods (info, warning,
-error, critical, debug). If 'add_timestamp' is set to True, a time stamp
-will precede your log message.
-
-Members:
-
-
-
-critical(...)
-
-Method:
-
-```python
-critical(self, *args)
-```
-
-Log a critical message.
-
-
-
-
-
-debug(...)
-
-Method:
-
-```python
-debug(self, *args)
-```
-
-Log a debug message.
-
-
-
-
-
-error(...)
-
-Method:
-
-```python
-error(self, *args)
-```
-
-Log an error message.
-
-
-
-
-
-exception(...)
-
-Method:
-
-```python
-exception(self, *args)
-```
-
-Log an exception message.
-
-
-
-
-
-fatal(...)
-
-Method:
-
-```python
-fatal(self, *args)
-```
-
-Just calls critical().
-
-
-
-
-
-info(...)
-
-Method:
-
-```python
-info(self, *args)
-```
-
-Log an information message.
-
-
-
-
-
-warning(...)
-
-Method:
-
-```python
-warning(self, *args)
-```
-
-Log a warning message.
-
-
-
-### _class_ **ParentProcessLogHandler**
-
-
-```python
-ParentProcessLogHandler(log_q)
-```
-
-Members:
-
-
-
-acquire(...)
-
-Method:
-
-```python
-acquire(self)
-```
-
-Acquire the I/O thread lock.
-
-
-
-
-
-addFilter(...)
-
-Method:
-
-```python
-addFilter(self, filter)
-```
-
-Add the specified filter to this handler.
-
-
-
-
-
-close(...)
-
-Method:
-
-```python
-close(self)
-```
-
-Tidy up any resources used by the handler.
-
-This version removes the handler from an internal map of handlers,
-_handlers, which is used for handler lookup by name. Subclasses
-should ensure that this gets called from overridden close()
-methods.
-
-
-
-
-
-createLock(...)
-
-Method:
-
-```python
-createLock(self)
-```
-
-Acquire a thread lock for serializing access to the underlying I/O.
-
-
-
-
-
-emit(...)
-
-Method:
-
-```python
-emit(self, record)
-```
-
-Emit log record by sending a pre-formatted record to the parent
-process
-
-
-
-
-
-filter(...)
-
-Method:
-
-```python
-filter(self, record)
-```
-
-Determine if a record is loggable by consulting all the filters.
-
-The default is to allow the record to be logged; any filter can veto
-this by returning a false value.
-If a filter attached to a handler returns a log record instance,
-then that instance is used in place of the original log record in
-any further processing of the event by that handler.
-If a filter returns any other true value, the original log record
-is used in any further processing of the event by that handler.
-
-If none of the filters return false values, this method returns
-a log record.
-If any of the filters return a false value, this method returns
-a false value.
-
-.. versionchanged:: 3.2
-
- Allow filters to be just callables.
-
-.. versionchanged:: 3.12
- Allow filters to return a LogRecord instead of
- modifying it in place.
-
-
-
-
-
-flush(...)
-
-Method:
-
-```python
-flush(self)
-```
-
-Flushes the stream.
-
-
-
-
-
-format(...)
-
-Method:
-
-```python
-format(self, record)
-```
-
-Format the specified record.
-
-If a formatter is set, use it. Otherwise, use the default formatter
-for the module.
-
-
-
-
-
-get_name(...)
-
-Method:
-
-```python
-get_name(self)
-```
-
-
-
-
-
-
-handle(...)
-
-Method:
-
-```python
-handle(self, record)
-```
-
-Conditionally emit the specified logging record.
-
-Emission depends on filters which may have been added to the handler.
-Wrap the actual emission of the record with acquisition/release of
-the I/O thread lock.
-
-Returns an instance of the log record that was emitted
-if it passed all filters, otherwise a false value is returned.
-
-
-
-
-
-handleError(...)
-
-Method:
-
-```python
-handleError(self, record)
-```
-
-Handle errors which occur during an emit() call.
-
-This method should be called from handlers when an exception is
-encountered during an emit() call. If raiseExceptions is false,
-exceptions get silently ignored. This is what is mostly wanted
-for a logging system - most users will not care about errors in
-the logging system, they are more interested in application errors.
-You could, however, replace this with a custom handler if you wish.
-The record which was being processed is passed in to this method.
-
-
-
-
-
-name
-
-
-
-
-
-
-release(...)
-
-Method:
-
-```python
-release(self)
-```
-
-Release the I/O thread lock.
-
-
-
-
-
-removeFilter(...)
-
-Method:
-
-```python
-removeFilter(self, filter)
-```
-
-Remove the specified filter from this handler.
-
-
-
-
-
-setFormatter(...)
-
-Method:
-
-```python
-setFormatter(self, fmt)
-```
-
-Set the formatter for this handler.
-
-
-
-
-
-setLevel(...)
-
-Method:
-
-```python
-setLevel(self, level)
-```
-
-Set the logging level of this handler. level must be an int or a str.
-
-
-
-
-
-setStream(...)
-
-Method:
-
-```python
-setStream(self, stream)
-```
-
-Sets the StreamHandler's stream to the specified value,
-if it is different.
-
-Returns the old stream, if the stream was changed, or None
-if it wasn't.
-
-
-
-
-
-set_name(...)
-
-Method:
-
-```python
-set_name(self, name)
-```
-
-
-
-
-
-
-terminator
-
-```python
-terminator = '\n'
-```
-
-
-
-
diff --git a/developer-reference/pyapi/ncs.maagic.md b/developer-reference/pyapi/ncs.maagic.md
deleted file mode 100644
index 26964e75..00000000
--- a/developer-reference/pyapi/ncs.maagic.md
+++ /dev/null
@@ -1,1391 +0,0 @@
-# Python ncs.maagic Module
-
-Confd/NCS data access module.
-
-This module implements classes and function for easy access to the data store.
-There is no need to manually instantiate any of the classes herein. The only
-functions that should be used are cd(), get_node() and get_root().
-
-## Functions
-
-### as_pyval
-
-```python
-as_pyval(mobj, name_type=3, include_oper=False, enum_as_string=True)
-```
-
-Convert maagic object to python value.
-
-The types are converted as follows:
-
-* List is converted to list.
-* Container is converted to dict.
-* Leaf is converted to python value.
-* EmptyLeaf is converted to bool.
-* ActionParams is converted to dict.
-
-If include_oper is False and an oper Node is
-passed, then None is returned.
-
-Arguments:
-
-* mobj -- maagic object (maagic.Enum, maagic.Bits, maagic.Node)
-* name_type -- one of NODE_NAME_SHORT, NODE_NAME_FULL,
-NODE_NAME_PY_SHORT and NODE_NAME_PY_FULL and controls dictionary
-key names
-* include_oper -- include operational data (boolean)
-* enum_as_string -- return enumerator in str form (boolean)
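-
-Example use (a sketch; the node is assumed to be a maagic Container):
-
-    device = root.devices.device['ce0']
-    pyval = ncs.maagic.as_pyval(device)
-    # pyval is now a plain dict mirroring the container's children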
-
-### cd
-
-```python
-cd(node, path)
-```
-
-Return the node at path 'path', starting from node 'node'.
-
-Arguments:
-
-* path -- relative or absolute keypath as a string (HKeypathRef or
- maagic.Node)
-
-Returns:
-
-* node (maagic.Node)
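-
-Example use:
-
-    conf = ncs.maagic.cd(root, '/ncs:devices/device{ce0}/config')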
-
-### get_maapi
-
-```python
-get_maapi(obj)
-```
-
-Get Maapi object from obj.
-
-Return Maapi object from obj. Raise BackendError if the
-provided object does not contain a Maapi object.
-
-Arguments:
-
-* obj -- object containing a Maapi reference (obj)
-
-Returns:
-
-* maapi object (maapi.Maapi)
-
-### get_memory_node
-
-```python
-get_memory_node(backend_or_node, path)
-```
-
-Return a Node at 'path' using 'backend' only for schema information.
-
-All operations towards the returned Node are cached in memory and not
-communicated to the server. This can be useful for effectively building a
-large data set which can later be converted to a TagValue array by calling
-get_tagvalues() or written directly to the server by calling
-set_memory_tree() and shared_set_memory_tree().
-
-Arguments:
-
-* backend_or_node -- backend or node object for reading schema
- information under mount points (maagic.Node,
- maapi.Transaction or maapi.Maapi)
-* path -- absolute keypath as a string (HKeypathRef or maagic.Node)
-
-Example use:
-
- conf = ncs.maagic.get_memory_node(t, '/ncs:devices/device{ce0}/conf')
-
-### get_memory_root
-
-```python
-get_memory_root(backend_or_node)
-```
-
-Return Root object with a memory-only backend.
-
-The passed in 'backend' is only used to read schema information when
-traversing past a mount point. All operations towards the returned Node
-are cached in memory and not communicated to the server.
-
-Arguments:
-
-* backend_or_node -- backend or node object for reading schema
- information under mount points (maagic.Node,
- maapi.Transaction or maapi.Maapi)
-
-### get_node
-
-```python
-get_node(backend_or_node, path, shared=False)
-```
-
-Return the node at path 'path' using 'backend'.
-
-Arguments:
-
-* backend_or_node -- backend object (maapi.Transaction, maapi.Maapi or None)
- or maapi.Node.
-* path -- relative or absolute keypath as a string (HKeypathRef or
- maagic.Node). Relative paths are only supported if backend_or_node
- is a maagic.Node.
-* shared -- if set to 'True', fastmap-friendly maapi calls, such as
- shared_set_elem, will be used within the returned tree (boolean)
-
-Example use:
-
- node = ncs.maagic.get_node(t, '/ncs:devices/device{ce0}')
-
-### get_root
-
-```python
-get_root(backend=None, shared=False)
-```
-
-Return a Root object for 'backend'.
-
-If 'backend' is a Transaction object, the returned Maagic object can be
-used to read and write transactional data. When 'backend' is a Maapi
-object you cannot read and write data, however, you may use the Maagic
-object to call an action (that doesn't require a transaction).
-If 'backend' is a Node object the underlying Transaction or Maapi object
-will be used (if any), otherwise backend will be assumed to be None.
-'backend' may also be None (default) in which case the returned Maagic
-object is not connected to NCS in any way. You can still use the maagic
-object to build an in-memory tree which may be converted to an array
-of TagValue objects.
-
-Arguments:
-
-* backend -- backend object (maagic.Node, maapi.Transaction, maapi.Maapi
- or None)
-* shared -- if set to 'True', fastmap-friendly maapi calls, such as
- shared_set_elem, will be used within the returned tree (boolean)
-
-Returns:
-
-* root node (maagic.Root)
-
-Example use:
-
- with ncs.maapi.Maapi() as m:
- with ncs.maapi.Session(m, 'admin', 'python'):
- root = ncs.maagic.get_root(m)
-
-### get_tagvalues
-
-```python
-get_tagvalues(node)
-```
-
-Return a list of TagValue's representing 'node'.
-
-Arguments:
-
-* node -- A Node object.
-
-### get_trans
-
-```python
-get_trans(node_or_trans)
-```
-
-Get Transaction object from node_or_trans.
-
-Return Transaction object from node_or_trans. Raise BackendError if
-provided object does not contain a Transaction object.
-
-### set_memory_tree
-
-```python
-set_memory_tree(node, trans_obj=None)
-```
-
-Calls Maapi.set_values() using TagValue's from 'node'.
-
-The backend specified when obtaining the initial node, most likely by using
-'get_memory_node()' or 'get_memory_root()', will be used if that is a
-maapi.Transaction backend, otherwise 'trans_obj' will be used.
-
-Arguments:
-
-* node -- a Node object (Node)
-* trans_obj -- another transaction object to use in case node's backend is
- not a transaction backend (Node or maapi.Transaction)
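-
-Example use (a sketch building on the get_memory_node() example; 't' is
-an open write transaction and 'hostname' an illustrative leaf):
-
-    conf = ncs.maagic.get_memory_node(t, '/ncs:devices/device{ce0}/conf')
-    conf.hostname = 'ce0'
-    ncs.maagic.set_memory_tree(conf)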
-
-### set_values_xml
-
-```python
-set_values_xml(node, xml)
-```
-
-Parses the XML document in 'xml' and sets values in the transaction.
-
-The XML document must be explicit with regards to namespaces and tags and
-the top node must represent the corresponding 'node' object.
-
-### shared_set_memory_tree
-
-```python
-shared_set_memory_tree(node, trans_obj=None)
-```
-
-Calls Maapi.shared_set_values() using TagValue's from 'node'.
-
-For use in FASTMAP code (services). See set_memory_tree().
-
-### shared_set_values_xml
-
-```python
-shared_set_values_xml(node, xml)
-```
-
-Parses the XML document in 'xml' and sets values in the transaction.
-
-The XML document must be explicit with regards to namespaces and tags and
-the top node must represent the corresponding 'node' object. This variant
-is to be used in services where FASTMAP attributes must be preserved.
-
-
-## Classes
-
-### _class_ **Action**
-
-Represents a tailf:action node.
-
-```python
-Action(backend, cs_node, parent=None)
-```
-
-Initialize an Action node. Should not be called explicitly.
-
-Members:
-
-
-
-get_input(...)
-
-Method:
-
-```python
-get_input(self)
-```
-
-Return a node tree representing the input node of this action.
-
-Returns:
-
-* action inputs (maagic.ActionParams)
-
-
-
-
-
-get_output(...)
-
-Method:
-
-```python
-get_output(self)
-```
-
-Return a node tree representing the output node of this action.
-
-Note that this does not actually request the action.
-Should not normally be called explicitly.
-
-Returns:
-
-* action outputs (maagic.ActionParams)
-
-
-
-
-
-request(...)
-
-Method:
-
-```python
-request(self, params=None)
-```
-
-Request the action and return the result as an ActionParams node.
-
-Arguments:
-
-* params -- input parameters of the action (maagic.ActionParams,
- optional)
-
-Returns:
-
-* outparams -- output parameters of the action (maagic.ActionParams)
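-
-Example use (a sketch; the action and parameter names are illustrative):
-
-    action = root.some_container.my_action
-    inp = action.get_input()
-    inp.some_param = 'value'
-    outp = action.request(inp)
-    print(outp.result)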
-
-
-
-### _class_ **ActionParams**
-
-Represents the input or output parameters of a tailf:action.
-
-The ActionParams node is the root of a tree representing either the input
-or the output parameters of an action. Action parameters can be read and
-set just like any other nodes in the tree.
-
-```python
-ActionParams(cs_node, parent, output=False)
-```
-
-Initialize an ActionParams node.
-
-Should not be called explicitly. Use 'get_input()' on an Action node
-to retrieve its input parameters or 'request()' to request the action
-and obtain the output parameters.
-
-Members:
-
-_None_
-
-### _class_ **BackendError**
-
-Exception type used within maagic backends.
-
-Members:
-
-
-
-add_note(...)
-
-Method:
-
-Exception.add_note(note) --
-add a note to the exception
-
-
-
-
-
-args
-
-
-
-
-
-
-with_traceback(...)
-
-Method:
-
-Exception.with_traceback(tb) --
-set self.__traceback__ to tb and return self.
-
-
-
-### _class_ **Bits**
-
-Representation of a YANG bits leaf with position > 63.
-
-```python
-Bits(value, cs_node=None)
-```
-
-Initialize a Bits object.
-
-Note that a Bits object has no connection to the YANG model and will
-not check that the given value matches the string representation
-according to the schema. Normally it is not necessary to create
-Bits objects using this constructor as bits leaves can be set using
-bytearrays alone.
-
-Attributes:
-
-* value -- a Value object of type C_BITBIG
-* cs_node -- a CsNode representing the YANG bits leaf. Without this
- you cannot get a string representation of the bits
- value; in that case repr(self) will be returned for
- the str() call. (default: None)
-
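-Example use (a sketch assuming 'flags' is a bits leaf with positions
-above 63):
-
-    val = root.model.flags         # reading yields a Bits instance
-    if not val.is_bit_set(64):
-        val.set_bit(64)
-        root.model.flags = val     # write the updated value back
-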
-Members:
-
-
-
-bytearray(...)
-
-Method:
-
-```python
-bytearray(self)
-```
-
-Return a 'little-endian' byte array.
-
-
-
-
-
-clr_bit(...)
-
-Method:
-
-```python
-clr_bit(self, position)
-```
-
-Clear a bit at a specific position in the internal byte array.
-
-
-
-
-
-is_bit_set(...)
-
-Method:
-
-```python
-is_bit_set(self, position)
-```
-
-Check if a bit at a specific position is set.
-
-
-
-
-
-set_bit(...)
-
-Method:
-
-```python
-set_bit(self, position)
-```
-
-Set a bit at a specific position in the internal byte array.
-
-
-
-### _class_ **Case**
-
-Represents a case node.
-
-If this case node has any nested choice nodes, those will appear as
-children of this object.
-
-```python
-Case(backend, cs_node, cs_case, parent)
-```
-
-Initialize a Case node. Should not be called explicitly.
-
-Members:
-
-_None_
-
-### _class_ **Choice**
-
-Represents a choice node.
-
-```python
-Choice(backend, cs_node, cs_choice, parent)
-```
-
-Initialize a Choice node. Should not be called explicitly.
-
-Members:
-
-
-
-get_value(...)
-
-Method:
-
-```python
-get_value(self)
-```
-
-Return the currently selected case of this choice.
-
-The case is returned as a Case node. If no case is selected for this
-choice, None is returned.
-
-Returns:
-
-* current selection of choice (maagic.Case)
-
-
-
-### _class_ **Container**
-
-Represents a YANG container.
-
-A (non-presence) container node or a list element, contains other nodes.
-
-```python
-Container(backend, cs_node, parent=None, children=None)
-```
-
-Initialize Container node. Should not be called explicitly.
-
-Members:
-
-
-
-delete(...)
-
-Method:
-
-```python
-delete(self)
-```
-
-Delete the container.
-
-Deletes all nodes inside the container. The container itself is not
-affected as it carries no state of its own.
-
-Example use:
-
- root.container.delete()
-
-
-
-### _class_ **Empty**
-
-Simple representation of a YANG empty value.
-
-This is used to represent an empty value in unions and list keys.
-
-```python
-Empty()
-```
-
-Initialize an Empty object.
-
-Members:
-
-_None_
-
-### _class_ **EmptyLeaf**
-
-Represents a leaf with the type "empty".
-
-```python
-EmptyLeaf(backend, cs_node, parent=None)
-```
-
-Initialize an EmptyLeaf node. Should not be called explicitly.
-
-Members:
-
-
-
-create(...)
-
-Method:
-
-```python
-create(self)
-```
-
-Create and return this leaf in the data tree.
-
-
-
-
-
-delete(...)
-
-Method:
-
-```python
-delete(self)
-```
-
-Delete this leaf from the data tree.
-
-
-
-
-
-exists(...)
-
-Method:
-
-```python
-exists(self)
-```
-
-Return True if this leaf exists in the data tree.
-
-
-
-### _class_ **Enum**
-
-Simple representation of a YANG enumeration instance.
-
-Contains the string and integer representation of the enumeration.
-An Enum object supports comparisons with other 'Enum' objects as well as
-with other objects. For equality checks, strings, numbers, 'Enum' objects
-and 'Value' objects are allowed. For relational operators,
-all of the above except strings are acceptable.
-
-Attributes:
-
-* string -- string representation of the enumeration
-* value -- integer representation of the enumeration
-
-```python
-Enum(string, value)
-```
-
-Initialize an Enum object from a given string and integer.
-
-Note that an Enum object has no connection to the YANG model and will
-not check that the given value matches the string representation
-according to the schema. Normally it is not necessary to create
-Enum objects using this constructor as enum leaves can be set using
-strings alone.
-
-Arguments:
-
-* string -- string representation of the enumeration (str)
-* value -- integer representation of the enumeration (int)
-
-Members:
-
-_None_
-
-### _class_ **Leaf**
-
-Base class for leaf nodes.
-
-Subclassed by NonEmptyLeaf, EmptyLeaf and LeafList.
-
-```python
-Leaf(backend, cs_node, parent=None)
-```
-
-Initialize Leaf node. Should not be called explicitly.
-
-Members:
-
-
-
-delete(...)
-
-Method:
-
-```python
-delete(self)
-```
-
-Delete this leaf from the data tree.
-
-Example use:
-
- root.model.leaf.delete()
-
-
-
-### _class_ **LeafList**
-
-Represents a leaf-list node.
-
-```python
-LeafList(backend, cs_node, parent=None)
-```
-
-Initialize a LeafList node. Should not be called explicitly.
-
-Members:
-
-
-
-as_list(...)
-
-Method:
-
-```python
-as_list(self)
-```
-
-Return leaf-list values in a list.
-
-Returns:
-
-* leaf list values (list)
-
-Example use:
-
- root.model.ll.as_list()
-
-
-
-
-
-create(...)
-
-Method:
-
-```python
-create(self, key)
-```
-
-Create a new leaf-list item.
-
-Arguments:
-
-* key -- item key (str or maapi.Key)
-
-Example use:
-
- root.model.ll.create('example')
-
-
-
-
-
-delete(...)
-
-Method:
-
-```python
-delete(self)
-```
-
-Delete the entire leaf-list.
-
-Example use:
-
- root.model.ll.delete()
-
-
-
-
-
-exists(...)
-
-Method:
-
-```python
-exists(self)
-```
-
-Return true if the leaf-list exists (has values) in the data tree.
-
-Example use:
-
- if root.model.ll.exists():
- do_things()
-
-
-
-
-
-remove(...)
-
-Method:
-
-```python
-remove(self, key)
-```
-
-Remove a specific leaf-list item.
-
-Arguments:
-
-* key -- item key (str or maapi.Key)
-
-Example use:
-
- root.model.ll.remove('example')
-
-
-
-
-
-set_value(...)
-
-Method:
-
-```python
-set_value(self, value)
-```
-
-Set this leaf-list using a python list.
-
-
-
-### _class_ **LeafListIterator**
-
-LeafList iterator.
-
-An instance of this class will be returned when iterating a leaf-list.
-
-```python
-LeafListIterator(lst)
-```
-
-Initialize this object.
-
-An instance of this class will be created when iteration of a
-leaf-list starts. Should not be called explicitly.
-
-Members:
-
-
-
-delete(...)
-
-Method:
-
-```python
-delete(self)
-```
-
-Delete the iterator.
-
-
-
-
-
-next(...)
-
-Method:
-
-```python
-next(self)
-```
-
-Get the next value from the iterator.
-
-
-
-### _class_ **List**
-
-Represents a list node.
-
-A list can be treated mostly like a python dictionary. It supports
-indexing, iteration, the len function, and the in and del operators.
-New items must, however, be created explicitly using the 'create' method.
-
-```python
-List(backend, cs_node, parent=None)
-```
-
-Initialize a List node. Should not be called explicitly.
-
-Members:
-
-
-
-create(...)
-
-Method:
-
-```python
-create(self, *keys)
-```
-
-Create and return a new list item with the key '*keys'.
-
-Arguments can be a single 'maapi.Key' object or one value for each key
-in the list. For a keyless oper or in-memory list (e.g. in action
-parameters), no argument should be given.
-
-Arguments:
-
-* keys -- item keys (list[str] or maapi.Key)
-
-Returns:
-
-* list item (maagic.ListElement)
-
-
-
-
-
-delete(...)
-
-Method:
-
-```python
-delete(self)
-```
-
-Delete the entire list.
-
-
-
-
-
-exists(...)
-
-Method:
-
-```python
-exists(self, keys)
-```
-
-Check if list has an item matching 'keys'.
-
-Arguments:
-
-* keys -- item keys (list[str] or maapi.Key)
-
-Returns:
-
-* boolean
-
-
-
-
-
-filter(...)
-
-Method:
-
-```python
-filter(self, xpath_expr=None, secondary_index=None)
-```
-
-Return a filtered iterator for the list.
-
-With this method it is possible to filter the selection using an XPath
-expression and/or a secondary index. If supported by the data provider,
-filtering will be done there.
-
-Not available for in-memory lists.
-
-Keyword arguments:
-
-* xpath_expr -- a valid XPath expression for filtering or None
- (string, default: None) (optional)
-* secondary_index -- secondary index to use or None
- (string, default: None) (optional)
-
-Returns:
-
-* iterator (maagic.ListIterator)
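-
-Example use (a sketch; the XPath expression is illustrative):
-
-    for dev in root.devices.device.filter(
-            xpath_expr="starts-with(name, 'ce')"):
-        print(dev.name)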
-
-
-
-
-
-keys(...)
-
-Method:
-
-```python
-keys(self, xpath_expr=None, secondary_index=None)
-```
-
-Return all keys in the list.
-
-Note that this will immediately retrieve every key value from the CDB.
-For a long list this could be a time-consuming operation. The keys
-selection may be filtered using 'xpath_expr' and 'secondary_index'.
-
-Not available for in-memory lists.
-
-Keyword arguments:
-
-* xpath_expr -- a valid XPath expression for filtering or None
- (string, default: None) (optional)
-* secondary_index -- secondary index to use or None
- (string, default: None) (optional)
-
-
-
-
-
-move(...)
-
-Method:
-
-```python
-move(self, key, where, to=None)
-```
-
-Move the item with key 'key' in an ordered-by user list.
-
-The destination is given by the arguments 'where' and 'to'.
-
-Arguments:
-
-* key -- key of the element that is to be moved (str or maapi.Key)
-* where -- one of 'maapi.MOVE_BEFORE', 'maapi.MOVE_AFTER',
- 'maapi.MOVE_FIRST', or 'maapi.MOVE_LAST'
-
-Keyword arguments:
-
-* to -- key of the destination item for relative moves, only applicable
- if 'where' is either 'maapi.MOVE_BEFORE' or 'maapi.MOVE_AFTER'.
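-
-Example use (a sketch assuming an ordered-by user list 'rules' with
-entries 'rule1' and 'rule2'):
-
-    root.model.rules.move('rule2', ncs.maapi.MOVE_AFTER, to='rule1')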
-
-
-
-### _class_ **ListElement**
-
-Represents a list element.
-
-This is a Container object with a specialized __repr__() method.
-
-```python
-ListElement(backend, cs_node, parent=None, children=None)
-```
-
-Initialize Container node. Should not be called explicitly.
-
-Members:
-
-
-
-delete(...)
-
-Method:
-
-```python
-delete(self)
-```
-
-Delete the container.
-
-Deletes all nodes inside the container. The container itself is not
-affected as it carries no state of its own.
-
-Example use:
-
- root.container.delete()
-
-
-
-### _class_ **ListIterator**
-
-List iterator.
-
-An instance of this class will be returned when iterating a list.
-
-```python
-ListIterator(lst, secondary_index=None, xpath_expr=None)
-```
-
-Initialize this object.
-
-An instance of this class will be created when iteration of a
-list starts. Should not be called explicitly.
-
-Members:
-
-
-
-delete(...)
-
-Method:
-
-```python
-delete(self)
-```
-
-Delete the iterator.
-
-
-
-
-
-next(...)
-
-Method:
-
-```python
-next(self)
-```
-
-Get the next value from the iterator.
-
-
-
-### _class_ **MaagicError**
-
-Exception type used within maagic.
-
-Members:
-
-
-
-add_note(...)
-
-Method:
-
-Exception.add_note(note) --
-add a note to the exception
-
-
-
-
-
-args
-
-
-
-
-
-
-with_traceback(...)
-
-Method:
-
-Exception.with_traceback(tb) --
-set self.__traceback__ to tb and return self.
-
-
-
-### _class_ **Node**
-
-Base class of all nodes in the configuration tree.
-
-Contains magic overrides that make children in the YANG tree appear as
-attributes of the Node object and as elements in the list 'self'.
-
-Attributes:
-
-* _name -- the YANG name of this node (str)
-* _path -- the keypath of this node in string form (HKeypathRef)
-* _parent -- the parent of this node, or None if this node
- has no parent (maagic.Node)
-* _cs_node -- the schema node of this node, or None if this node is not in
- the schema (maagic.Node)
-
-```python
-Node(backend, cs_node, parent=None, is_root=False)
-```
-
-Initialize a Node object. Should not be called explicitly.
-
-Members:
-
-_None_
-
-### _class_ **NonEmptyLeaf**
-
-Represents a leaf with a type other than "empty".
-
-```python
-NonEmptyLeaf(backend, cs_node, parent=None)
-```
-
-Initialize a NonEmptyLeaf node. Should not be called explicitly.
-
-Members:
-
-
-
-delete(...)
-
-Method:
-
-```python
-delete(self)
-```
-
-Delete this leaf from the data tree.
-
-
-
-
-
-exists(...)
-
-Method:
-
-```python
-exists(self)
-```
-
-Check if leaf exists.
-
-Return True if this leaf exists (has a value) in the data tree.
-
-
-
-
-
-get_value(...)
-
-Method:
-
-```python
-get_value(self)
-```
-
-Return the value of this leaf.
-
-The value is returned as the most appropriate python data type.
-
-
-
-
-
-get_value_object(...)
-
-Method:
-
-```python
-get_value_object(self)
-```
-
-Return the value of this leaf as a Value object.
-
-
-
-
-
-set_cache(...)
-
-Method:
-
-```python
-set_cache(self, value)
-```
-
-Set the cached value of this leaf without updating the data tree.
-
-Use of this method is strongly discouraged.
-
-
-
-
-
-set_value(...)
-
-Method:
-
-```python
-set_value(self, value)
-```
-
-Set the value of this leaf.
-
-Arguments:
-
-* value -- the value to be set. If 'value' is not a Value object,
- it will be converted to one using Value.str2val.
-
-
-
-
-
-update_cache(...)
-
-Method:
-
-```python
-update_cache(self, force=False)
-```
-
-Read this leaf's value from the data tree and store it in the cache.
-
-There is no need to call this method explicitly.
-
-
-
-### _class_ **PresenceContainer**
-
-Represents a presence container.
-
-```python
-PresenceContainer(backend, cs_node, parent=None)
-```
-
-Initialize a PresenceContainer. Should not be called explicitly.
-
-Members:
-
-
-
-create(...)
-
-Method:
-
-```python
-create(self)
-```
-
-Create and return this presence container in the data tree.
-
-Example use:
-
- pc = root.container.presence_container.create()
-
-
-
-
-
-delete(...)
-
-Method:
-
-```python
-delete(self)
-```
-
-Delete this presence container from the data tree.
-
-Example use:
-
- root.container.presence_container.delete()
-
-
-
-
-
-exists(...)
-
-Method:
-
-```python
-exists(self)
-```
-
-Return true if the presence container exists in the data tree.
-
-Example use:
-
- root.container.presence_container.exists()
-
-
-
-### _class_ **Root**
-
-Represents the root node in the configuration tree.
-
-The root node is not represented in the schema; it is added for convenience
-and can contain the top-level nodes from any number of namespaces as
-children.
-
-```python
-Root(backend=None, namespaces=None)
-```
-
-Initialize a Root node.
-
-Should not be called explicitly. Instead, use the function
-'get_root()'.
-
-Arguments:
-
-* backend -- backend to use, or 'None' for an in-memory tree
- (maapi.Maapi or maapi.Transaction)
-* namespaces -- which namespaces to include in the tree (list)
-
-Members:
-
-_None_
-
-## Predefined Values
-
-```python
-
-NODE_NAME_FULL = 0
-NODE_NAME_PY_FULL = 2
-NODE_NAME_PY_SHORT = 3
-NODE_NAME_SHORT = 1
-```
diff --git a/developer-reference/pyapi/ncs.maapi.md b/developer-reference/pyapi/ncs.maapi.md
deleted file mode 100644
index a355f86f..00000000
--- a/developer-reference/pyapi/ncs.maapi.md
+++ /dev/null
@@ -1,2870 +0,0 @@
-# Python ncs.maapi Module
-
-MAAPI high level module.
-
-This module defines a high level interface to the low-level maapi functions.
-
-The 'Maapi' class encapsulates a MAAPI connection which upon constructing,
-sets up a connection towards ConfD/NCS. An example of setting up a transaction
-and manipulating data:
-
- import ncs
-
- m = ncs.maapi.Maapi()
- m.start_user_session('admin', 'test_context')
- t = m.start_write_trans()
- t.get_elem('/model/data{one}/str')
- t.set_elem('testing', '/model/data{one}/str')
- t.apply()
-
-Another way is to use context managers, which will handle all cleanup
-related to transactions, user sessions and socket connections:
-
- with ncs.maapi.Maapi() as m:
- with ncs.maapi.Session(m, 'admin', 'test_context'):
- with m.start_write_trans() as t:
- t.get_elem('/model/data{one}/str')
- t.set_elem('testing', '/model/data{one}/str')
- t.apply()
-
-Finally, a really compact way of doing this:
-
- with ncs.maapi.single_write_trans('admin', 'test_context') as t:
- t.get_elem('/model/data{one}/str')
- t.set_elem('testing', '/model/data{one}/str')
- t.apply()
-
-## Functions
-
-### connect
-
-```python
-connect(ip='127.0.0.1', port=4569, path=None)
-```
-
-Convenience function for connecting to ConfD/NCS.
-
-The 'ip' and 'port' arguments are ignored if path is specified.
-
-Arguments:
-
-* ip -- ConfD/NCS instance ip address (str)
-* port -- ConfD/NCS instance port (int)
-* path -- ConfD/NCS instance location path (str)
-
-Returns:
-
-* socket (Python socket)
-
-### retry_on_conflict
-
-```python
-retry_on_conflict(retries=10, log=None)
-```
-
-Function/method decorator to retry a transaction in case of conflicts.
-
-When executing multiple concurrent transactions against the NCS RUNNING
-datastore, read-write conflicts are resolved by rejecting transactions
-having potentially stale data with ERR_TRANSACTION_CONFLICT.
-
-This decorator restarts a function, should it run into a conflict, giving
-it multiple attempts to apply. The decorated function must start its own
-transaction because a conflicting transaction must be thrown away entirely
-and a new one started.
-
-Example usage:
-
- @retry_on_conflict()
- def do_work():
- with ncs.maapi.single_write_trans('admin', 'python') as t:
- root = ncs.maagic.get_root(t)
- root.some_value = str(root.some_other_value)
- t.apply()
-
-Arguments:
-
-* retries -- number of times to retry (int)
-* log -- optional log object for logging conflict details
-
-### single_read_trans
-
-```python
-single_read_trans(user, context, groups=[], db=2, ip='127.0.0.1', port=4569, path=None, src_ip='127.0.0.1', src_port=0, proto=1, vendor=None, product=None, version=None, client_id=None, load_schemas=True, flags=0)
-```
-
-Context manager for a single READ transaction.
-
-This function connects to ConfD/NCS, starts a user session and finally
-starts a new READ transaction.
-
-Function signature:
-
-    def single_read_trans(user, context, groups=[],
-                          db=RUNNING, ip='127.0.0.1',
-                          port=4569, path=None,
-                          src_ip='127.0.0.1', src_port=0,
-                          proto=PROTO_TCP,
-                          vendor=None, product=None, version=None,
-                          client_id=_mk_client_id(),
-                          load_schemas=LOAD_SCHEMAS_LOAD, flags=0):
-
-For argument db, flags see Maapi.start_trans(). For arguments user,
-context, groups, src_ip, src_port, proto, vendor, product, version and
-client_id see Maapi.start_user_session().
-For arguments ip, port and path see connect().
-For argument load_schemas see __init__().
-
-Arguments:
-
-* user -- username (str)
-* context -- context for the session (str)
-* groups -- groups (list)
-* db -- database (int)
-* ip -- ConfD/NCS instance ip address (str)
-* port -- ConfD/NCS instance port (int)
-* path -- ConfD/NCS instance location path (str)
-* src_ip -- source ip address (str)
-* src_port -- source port (int)
-* proto -- protocol used by the client for connecting (e.g. ncs.PROTO_TCP)
-* vendor -- lock error information (str, optional)
-* product -- lock error information (str, optional)
-* version -- lock error information (str, optional)
-* client_id -- lock error information (str, optional)
-* load_schemas -- passed on to Maapi.__init__()
-* flags -- additional transaction flags (int)
-
-Returns:
-
-* read transaction object (maapi.Transaction)
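-
-Example use (mirroring the single_write_trans example in the module
-introduction):
-
-    with ncs.maapi.single_read_trans('admin', 'test_context') as t:
-        value = t.get_elem('/model/data{one}/str')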
-
-### single_write_trans
-
-```python
-single_write_trans(user, context, groups=[], db=2, ip='127.0.0.1', port=4569, path=None, src_ip='127.0.0.1', src_port=0, proto=1, vendor=None, product=None, version=None, client_id=None, load_schemas=True, flags=0)
-```
-
-Context manager for a single READ/WRITE transaction.
-
-This function connects to ConfD/NCS, starts a user session and finally
-starts a new READ/WRITE transaction.
-
-Function signature:
-
-    def single_write_trans(user, context, groups=[],
-                           db=RUNNING, ip='127.0.0.1',
-                           port=4569, path=None,
-                           src_ip='127.0.0.1', src_port=0,
-                           proto=PROTO_TCP,
-                           vendor=None, product=None, version=None,
-                           client_id=_mk_client_id(),
-                           load_schemas=LOAD_SCHEMAS_LOAD, flags=0):
-
-For argument db, flags see Maapi.start_trans(). For arguments user,
-context, groups, src_ip, src_port, proto, vendor, product, version and
-client_id see Maapi.start_user_session().
-For arguments ip, port and path see connect().
-For argument load_schemas see __init__().
-
-Arguments:
-
-* user -- username (str)
-* context -- context for the session (str)
-* groups -- groups (list)
-* db -- database (int)
-* ip -- ConfD/NCS instance ip address (str)
-* port -- ConfD/NCS instance port (int)
-* path -- ConfD/NCS instance location path (str)
-* src_ip -- source ip address (str)
-* src_port -- source port (int)
-* proto -- protocol used by the client for connecting (int)
-* vendor -- lock error information (str, optional)
-* product -- lock error information (str, optional)
-* version -- lock error information (str, optional)
-* client_id -- lock error information (str, optional)
-* load_schemas -- passed on to Maapi.__init__()
-* flags -- additional transaction flags (int)
-
-Returns:
-
-* write transaction object (maapi.Transaction)
-
-
-## Classes
-
-### _class_ **CommitParams**
-
-Class representing NSO commit parameters.
-
-Start with creating an empty instance of this class and set commit
-parameters using helper methods.
-
-```python
-CommitParams(result=None)
-```
-
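-A minimal sketch (assuming 't' is a write transaction whose
-apply_params() method accepts a CommitParams instance):
-
-    params = ncs.maapi.CommitParams()
-    params.dry_run_native()
-    params.no_networking()
-    result = t.apply_params(True, params)
-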
-Members:
-
-
-
-comment(...)
-
-Method:
-
-```python
-comment(self, comment)
-```
-
-Set comment.
-
-
-
-
-
-commit_queue_async(...)
-
-Method:
-
-```python
-commit_queue_async(self)
-```
-
-Set commit queue asynchronous mode of operation.
-
-
-
-
-
-commit_queue_atomic(...)
-
-Method:
-
-```python
-commit_queue_atomic(self)
-```
-
-Make the commit queue item atomic.
-
-
-
-
-
-commit_queue_block_others(...)
-
-Method:
-
-```python
-commit_queue_block_others(self)
-```
-
-Make the commit queue item block other commit queue items for
-this device.
-
-
-
-
-
-commit_queue_bypass(...)
-
-Method:
-
-```python
-commit_queue_bypass(self)
-```
-
-Make the commit transactional even if commit queue is
-configured by default.
-
-
-
-
-
-commit_queue_error_option(...)
-
-Method:
-
-```python
-commit_queue_error_option(self, error_option)
-```
-
-Set commit queue item behaviour on error.
-
-
-
-
-
-commit_queue_lock(...)
-
-Method:
-
-```python
-commit_queue_lock(self)
-```
-
-Make the commit queue item locked.
-
-
-
-
-
-commit_queue_non_atomic(...)
-
-Method:
-
-```python
-commit_queue_non_atomic(self)
-```
-
-Make the commit queue item non-atomic.
-
-
-
-
-
-commit_queue_sync(...)
-
-Method:
-
-```python
-commit_queue_sync(self, timeout=None)
-```
-
-Set commit queue synchronous mode of operation.
-
-
-
-
-
-commit_queue_tag(...)
-
-Method:
-
-```python
-commit_queue_tag(self, tag)
-```
-
-Set commit-queue tag. Implicitly enables commit queue commit.
-
-This function is deprecated and will be removed in a future release.
-Use label() instead.
-
-
-
-
-
-confirm_network_state(...)
-
-Method:
-
-```python
-confirm_network_state(self)
-```
-
-Check that the parts of the device configuration read and/or
-modified are up-to-date in CDB before pushing the configuration
-change to the device.
-
-
-
-
-
-confirm_network_state_re_evaluate_policies(...)
-
-Method:
-
-```python
-confirm_network_state_re_evaluate_policies(self)
-```
-
-Check that the parts of the device configuration read and/or
-modified are up-to-date in CDB before pushing the configuration
-change to the device and re-evaluate policies of affected
-services.
-
-
-
-
-
-dry_run_cli(...)
-
-Method:
-
-```python
-dry_run_cli(self)
-```
-
-Dry-run commit outformat CLI.
-
-
-
-
-
-dry_run_cli_c(...)
-
-Method:
-
-```python
-dry_run_cli_c(self)
-```
-
-Dry-run commit outformat cli-c.
-
-
-
-
-
-dry_run_cli_c_reverse(...)
-
-Method:
-
-```python
-dry_run_cli_c_reverse(self)
-```
-
-Dry-run commit outformat cli-c reverse.
-
-
-
-
-
-dry_run_native(...)
-
-Method:
-
-```python
-dry_run_native(self)
-```
-
-Dry-run commit outformat native.
-
-
-
-
-
-dry_run_native_reverse(...)
-
-Method:
-
-```python
-dry_run_native_reverse(self)
-```
-
-Dry-run commit outformat native reverse.
-
-
-
-
-
-dry_run_xml(...)
-
-Method:
-
-```python
-dry_run_xml(self)
-```
-
-Dry-run commit outformat XML.
-
-
-
-
-
-get_comment(...)
-
-Method:
-
-```python
-get_comment(self)
-```
-
-Get comment.
-
-
-
-
-
-get_commit_queue_error_option(...)
-
-Method:
-
-```python
-get_commit_queue_error_option(self)
-```
-
-Get commit queue item behaviour on error.
-
-
-
-
-
-get_commit_queue_sync_timeout(...)
-
-Method:
-
-```python
-get_commit_queue_sync_timeout(self)
-```
-
-Get commit queue synchronous mode of operation timeout.
-
-
-
-
-
-get_commit_queue_tag(...)
-
-Method:
-
-```python
-get_commit_queue_tag(self)
-```
-
-Get commit-queue tag.
-
-This function is deprecated and will be removed in a future release.
-
-
-
-
-
-get_dry_run_outformat(...)
-
-Method:
-
-```python
-get_dry_run_outformat(self)
-```
-
-Get dry-run outformat.
-
-
-
-
-
-get_label(...)
-
-Method:
-
-```python
-get_label(self)
-```
-
-Get label.
-
-
-
-
-
-get_no_overwrite_scope(...)
-
-Method:
-
-```python
-get_no_overwrite_scope(self)
-```
-
-Get no-overwrite scope.
-
-
-
-
-
-get_trace_id(...)
-
-Method:
-
-```python
-get_trace_id(self)
-```
-
-Get trace id.
-
-
-
-
-
-is_commit_queue_async(...)
-
-Method:
-
-```python
-is_commit_queue_async(self)
-```
-
-Get commit queue asynchronous mode of operation.
-
-
-
-
-
-is_commit_queue_atomic(...)
-
-Method:
-
-```python
-is_commit_queue_atomic(self)
-```
-
-Check if the commit queue item should be atomic.
-
-
-
-
-
-is_commit_queue_block_others(...)
-
-Method:
-
-```python
-is_commit_queue_block_others(self)
-```
-
-Check if the commit queue item should block other commit
-queue items for this device.
-
-
-
-
-
-is_commit_queue_bypass(...)
-
-Method:
-
-```python
-is_commit_queue_bypass(self)
-```
-
-Check if the commit is transactional even if commit queue is
-configured by default.
-
-
-
-
-
-is_commit_queue_lock(...)
-
-Method:
-
-```python
-is_commit_queue_lock(self)
-```
-
-Check if the commit queue item should be locked.
-
-
-
-
-
-is_commit_queue_non_atomic(...)
-
-Method:
-
-```python
-is_commit_queue_non_atomic(self)
-```
-
-Check if the commit queue item should be non-atomic.
-
-
-
-
-
-is_commit_queue_sync(...)
-
-Method:
-
-```python
-is_commit_queue_sync(self)
-```
-
-Get commit queue synchronous mode of operation.
-
-
-
-
-
-is_confirm_network_state(...)
-
-Method:
-
-```python
-is_confirm_network_state(self)
-```
-
-Should a check be done that the parts of the device configuration
-read and/or modified are up-to-date in CDB before pushing the
-configuration change to the device.
-
-
-
-
-
-is_confirm_network_state_re_evaluate_policies(...)
-
-Method:
-
-```python
-is_confirm_network_state_re_evaluate_policies(self)
-```
-
-Is confirm-network-state with re-evaluate-policies enabled.
-
-
-
-
-
-is_dry_run(...)
-
-Method:
-
-```python
-is_dry_run(self)
-```
-
-Is dry-run enabled
-
-
-
-
-
-is_dry_run_reverse(...)
-
-Method:
-
-```python
-is_dry_run_reverse(self)
-```
-
-Is dry-run reverse enabled.
-
-
-
-
-
-is_no_deploy(...)
-
-Method:
-
-```python
-is_no_deploy(self)
-```
-
-Should service create method be invoked or not.
-
-
-
-
-
-is_no_lsa(...)
-
-Method:
-
-```python
-is_no_lsa(self)
-```
-
-Get no-lsa commit parameter.
-
-
-
-
-
-is_no_networking(...)
-
-Method:
-
-```python
-is_no_networking(self)
-```
-
-Check if the configuration should only be written to CDB and
-not actually pushed to the device.
-
-
-
-
-
-is_no_out_of_sync_check(...)
-
-Method:
-
-```python
-is_no_out_of_sync_check(self)
-```
-
-Do not check device sync state before pushing the configuration
-change.
-
-
-
-
-
-is_no_overwrite(...)
-
-Method:
-
-```python
-is_no_overwrite(self)
-```
-
-Should a check be done that the parts of the device configuration
-to be modified are up-to-date in CDB before pushing the
-configuration change to the device.
-
-
-
-
-
-is_no_revision_drop(...)
-
-Method:
-
-```python
-is_no_revision_drop(self)
-```
-
-Get no-revision-drop commit parameter.
-
-
-
-
-
-is_reconcile_attach_non_service_config(...)
-
-Method:
-
-```python
-is_reconcile_attach_non_service_config(self)
-```
-
-Get reconcile commit parameter with attach-non-service-config
-behaviour.
-
-
-
-
-
-is_reconcile_detach_non_service_config(...)
-
-Method:
-
-```python
-is_reconcile_detach_non_service_config(self)
-```
-
-Get reconcile commit parameter with detach-non-service-config
-behaviour.
-
-
-
-
-
-is_reconcile_discard_non_service_config(...)
-
-Method:
-
-```python
-is_reconcile_discard_non_service_config(self)
-```
-
-Get reconcile commit parameter with discard-non-service-config
-behaviour.
-
-
-
-
-
-is_reconcile_keep_non_service_config(...)
-
-Method:
-
-```python
-is_reconcile_keep_non_service_config(self)
-```
-
-Get reconcile commit parameter with keep-non-service-config
-behaviour.
-
-
-
-
-
-is_use_lsa(...)
-
-Method:
-
-```python
-is_use_lsa(self)
-```
-
-Get use-lsa commit parameter.
-
-
-
-
-
-is_with_service_meta_data(...)
-
-Method:
-
-```python
-is_with_service_meta_data(self)
-```
-
-Get with-service-meta-data commit parameter.
-
-
-
-
-
-label(...)
-
-Method:
-
-```python
-label(self, label)
-```
-
-Set label.
-
-
-
-
-
-no_deploy(...)
-
-Method:
-
-```python
-no_deploy(self)
-```
-
-Do not invoke service's create method.
-
-
-
-
-
-no_lsa(...)
-
-Method:
-
-```python
-no_lsa(self)
-```
-
-Set no-lsa commit parameter.
-
-
-
-
-
-no_networking(...)
-
-Method:
-
-```python
-no_networking(self)
-```
-
-Only write the configuration to CDB, do not actually push it to
-the device.
-
-
-
-
-
-no_out_of_sync_check(...)
-
-Method:
-
-```python
-no_out_of_sync_check(self)
-```
-
-Do not check device sync state before pushing the configuration
-change.
-
-
-
-
-
-no_overwrite(...)
-
-Method:
-
-```python
-no_overwrite(self, scope)
-```
-
-Check that the parts of the device configuration to be modified
-are up-to-date in CDB before pushing the configuration change to the
-device.
-
-
-
-
-
-no_revision_drop(...)
-
-Method:
-
-```python
-no_revision_drop(self)
-```
-
-Set no-revision-drop commit parameter.
-
-
-
-
-
-reconcile_attach_non_service_config(...)
-
-Method:
-
-```python
-reconcile_attach_non_service_config(self)
-```
-
-Set reconcile commit parameter with attach-non-service-config
-behaviour.
-
-
-
-
-
-reconcile_detach_non_service_config(...)
-
-Method:
-
-```python
-reconcile_detach_non_service_config(self)
-```
-
-Set reconcile commit parameter with detach-non-service-config
-behaviour.
-
-
-
-
-
-reconcile_discard_non_service_config(...)
-
-Method:
-
-```python
-reconcile_discard_non_service_config(self)
-```
-
-Set reconcile commit parameter with discard-non-service-config
-behaviour.
-
-
-
-
-
-reconcile_keep_non_service_config(...)
-
-Method:
-
-```python
-reconcile_keep_non_service_config(self)
-```
-
-Set reconcile commit parameter with keep-non-service-config
-behaviour.
-
-
-
-
-
-set_dry_run_outformat(...)
-
-Method:
-
-```python
-set_dry_run_outformat(self, outformat)
-```
-
-Set dry-run outformat.
-
-
-
-
-
-trace_id(...)
-
-Method:
-
-```python
-trace_id(self, trace_id)
-```
-
-Set trace id.
-
-
-
-
-
-use_lsa(...)
-
-Method:
-
-```python
-use_lsa(self)
-```
-
-Set use-lsa commit parameter.
-
-
-
-
-
-with_service_meta_data(...)
-
-Method:
-
-```python
-with_service_meta_data(self)
-```
-
-Set with-service-meta-data commit parameter.
-
-
-
-### _class_ **DryRunOutformat**
-
-Enumeration for dry run formats:
-XML = 1
-CLI = 2
-NATIVE = 3
-CLI_C = 4
-
-```python
-DryRunOutformat(*values)
-```
-
-Members:
-
-
-
-CLI
-
-```python
-CLI = 2
-```
-
-
-
-
-
-
-CLI_C
-
-```python
-CLI_C = 4
-```
-
-
-
-
-
-
-NATIVE
-
-```python
-NATIVE = 3
-```
-
-
-
-
-
-
-XML
-
-```python
-XML = 1
-```
-
-
-
-
-
-
-name
-
-The name of the Enum member.
-
-
-
-
-
-value
-
-The value of the Enum member.
-
-
-
-### _class_ **Key**
-
-Key string encapsulation and helper.
-
-```python
-Key(key, enum_cs_nodes=None)
-```
-
-Initialize a key.
-
-'key' may be a string or a list of strings.
-
-Members:
-
-_None_
-
-### _class_ **Maapi**
-
-Class encapsulating a MAAPI connection.
-
-```python
-Maapi(ip='127.0.0.1', port=4569, path=None, load_schemas=True, msock=None)
-```
-
-Create a Maapi instance.
-
-Arguments:
-
-* ip -- ConfD/NCS instance ip address (str, optional)
-* port -- ConfD/NCS instance port (int, optional)
-* path -- ConfD/NCS instance location path (str, optional)
-* msock -- already connected MAAPI socket (socket.socket, optional)
- (ip, port and path ignored)
-* load_schemas -- whether schemas should be loaded/reloaded or not
- LOAD_SCHEMAS_LOAD = load schemas unless already loaded
- LOAD_SCHEMAS_SKIP = do not load schemas
- LOAD_SCHEMAS_RELOAD = force reload of schemas
-
-The option LOAD_SCHEMAS_RELOAD can be used to force a reload of
-schemas, for example when connecting to a different ConfD/NSO node.
-Note that previously constructed maagic objects will be invalid and
-using them will lead to undefined behavior. Use this option with care,
-for example in a small script querying a list of running nodes.
-
-Members:
-
-
-
-apply_template(...)
-
-Method:
-
-```python
-apply_template(self, th, name, path, vars=None, flags=0)
-```
-
-Apply a template.
-
-
-
-
-
-attach(...)
-
-Method:
-
-```python
-attach(self, ctx_or_th, hashed_ns=0, usid=0)
-```
-
-Attach to an existing transaction.
-
-'ctx_or_th' may be either a TransCtxRef or a transaction handle.
-The 'hashed_ns' argument is basically just there to save a call to
-set_namespace(). 'usid' is only used if 'ctx_or_th' is a transaction
-handle and if set to 0 the user session id that is the owner of the
-transaction will be used.
-
-Arguments:
-
-* ctx_or_th (TransCtxRef or transaction handle)
-* hashed_ns (int)
-* usid (int)
-
-Returns:
-
-* transaction object (maapi.Transaction)
-
-
-
-
-
-attach_init(...)
-
-Method:
-
-```python
-attach_init(self)
-```
-
-Attach to phase0 for CDB initialization and upgrade.
-
-
-
-
-
-authenticate(...)
-
-Method:
-
-```python
-authenticate(self, user, password, n, src_addr=None, src_port=None, context=None, prot=None)
-```
-
-Authenticate a user using the AAA configuration.
-
-Use src_addr, src_port, context and prot to use an external
-authentication executable.
-Use 'n' to get a list of the n-1 groups that the user is a member of.
-Use n=1 if the function is used in a context where the group names
-are not needed.
-
-Returns 1 if the user was accepted and no groups were requested.
-Otherwise a tuple is returned whose first element is the status code,
-0 for rejection and 1 for acceptance, and whose second element
-contains either the reason for the rejection as a string or a list of
-group names.
-
-Arguments:
-
-* user - username (str)
-* password - password (str)
-* n - number of groups to return (int)
-* src_addr - source ip address (str)
-* src_port - source port (int)
-* context - context for the session (str)
-* prot - protocol used by the client for connecting (int)
-
-Returns:
-
-* status (int or tuple)
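-
-Example use (a sketch; the credentials are examples, and n=1 skips
-group retrieval):
-
-    status = m.authenticate('admin', 'secret', 1)
-    if status == 1:
-        print('accepted')
-    else:
-        code, info = status
-        print('accepted' if code == 1 else 'rejected: %s' % info)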
-
-
-
-
-
-close(...)
-
-Method:
-
-```python
-close(self)
-```
-
-Ends session and closes socket.
-
-
-
-
-
-cursor(...)
-
-Method:
-
-```python
-cursor(self, th, path, enum_cs_nodes=None, want_values=False, secondary_index=None, xpath_expr=None)
-```
-
-Get an iterable list cursor.
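-
-Example use (a sketch; the Cursor is iterable and yields the keys of
-each list entry):
-
-    with ncs.maapi.single_read_trans('admin', 'python') as t:
-        for keys in t.maapi.cursor(t.th, '/ncs:devices/device'):
-            print(str(keys[0]))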
-
-
-
-
-
-destroy_cursor(...)
-
-Method:
-
-```python
-destroy_cursor(self, mc)
-```
-
-Destroy cursor.
-
-Arguments:
-
-* cursor (maapi.Cursor)
-
-
-
-
-
-detach(...)
-
-Method:
-
-```python
-detach(self, ctx_or_th)
-```
-
-Detach the underlying MAAPI socket.
-
-Arguments:
-
-* ctx_or_th (TransCtxRef or transaction handle)
-
-
-
-
-
-do_display(...)
-
-Method:
-
-```python
-do_display(self, th, path)
-```
-
-Do display.
-
-If the data model uses the YANG when or tailf:display-when
-statement, this function can be used to determine if the item
-given by the path should be displayed or not.
-
-Arguments:
-
-* th -- transaction handle
-* path -- path to the 'display-when' statement (str)
-
-Returns
-
-* boolean
-
-
-
-
-
-end_progress_span(...)
-
-Method:
-
-```python
-end_progress_span(self, *args)
-```
-
-Don't call this function.
-
-Call instance.end() on the progress.Span instance created from
-start_progress_span() instead.
-
-
-
-
-
-exists(...)
-
-Method:
-
-```python
-exists(self, th, path)
-```
-
-Check if path exists.
-
-Arguments:
-
-* th -- transaction handle
-* path -- path to the node in the data tree (str)
-
-Returns:
-
-* boolean
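-
-Example use (a sketch; 'ex0' is a hypothetical device name):
-
-    if m.exists(th, '/ncs:devices/device{ex0}'):
-        print('ex0 is configured')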
-
-
-
-
-
-find_next(...)
-
-Method:
-
-```python
-find_next(self, mc, type, inkeys)
-```
-
-Find next.
-
-Update the cursor 'mc' with the key(s) for the list entry designated
-by the 'type' and 'inkeys' arguments. This function may be used to
-start a traversal from an arbitrary entry in a list. Keys for
-subsequent entries may be retrieved with the get_next() function.
-When no more keys are found, False is returned.
-
-The strategy to use is defined by 'type':
-
- FIND_NEXT - The keys for the first list entry after the one
- indicated by the 'inkeys' argument.
- FIND_SAME_OR_NEXT - If the values in the 'inkeys' array completely
- identifies an actual existing list entry, the keys for
- this entry are requested. Otherwise the same logic as
- for FIND_NEXT above.
-
-
-
-
-
-get_next(...)
-
-Method:
-
-```python
-get_next(self, mc)
-```
-
-Iterate and get the keys for the next entry in a list.
-
-When no more keys are found, False is returned.
-
-Arguments:
-
-* cursor (maapi.Cursor)
-
-Returns:
-
-* keys (list or boolean)
-
-
-
-
-
-get_objects(...)
-
-Method:
-
-```python
-get_objects(self, mc, n, nobj)
-```
-
-Get objects.
-
-Read at most 'n' values from each of 'nobj' list entries, starting at
-cursor 'mc'. Returns a list of Values.
-
-Arguments:
-
-* mc (maapi.Cursor)
-* n -- at most n values will be read (int)
-* nobj -- number of list entries from which 'n' values will be read (int)
-
-Returns:
-
-* list of values (list)
-
-
-
-
-
-get_running_db_status(...)
-
-Method:
-
-```python
-get_running_db_status(self)
-```
-
-Get running db status.
-
-Gets the status of the running db. Returns True if consistent and
-False otherwise.
-
-Returns:
-
-* boolean
-
-
-
-
-
-ip
-
-_Readonly property_
-
-Return the address used to connect to the IPC port.
-
-
-
-
-
-load_schemas(...)
-
-Method:
-
-```python
-load_schemas(self, use_maapi_socket=False)
-```
-
-Load the schemas to Python (using shared memory if enabled).
-
-If 'use_maapi_socket' is set to True, the schemas are loaded through
-the NSO daemon via a MAAPI socket.
-
-
-
-
-
-netconf_ssh_call_home(...)
-
-Method:
-
-```python
-netconf_ssh_call_home(self, host, port=4334)
-```
-
-Initiate NETCONF SSH Call Home.
-
-
-
-
-
-netconf_ssh_call_home_opaque(...)
-
-Method:
-
-```python
-netconf_ssh_call_home_opaque(self, host, opaque, port=4334)
-```
-
-Initiate NETCONF SSH Call Home with opaque data.
-
-
-
-
-
-path
-
-_Readonly property_
-
-Return the path used to connect to the IPC socket.
-
-
-
-
-
-port
-
-_Readonly property_
-
-Return the port used to connect to the IPC port.
-
-
-
-
-
-progress_info(...)
-
-Method:
-
-```python
-progress_info(self, msg, verbosity=0, attrs=None, links=None, path=None)
-```
-
-While spans represent a pair of data points, start and stop, info
-events are singular events: one point in time. Call
-progress_info() to write a progress span info event to the progress
-trace. The info event will have the same span-id as the start and stop
-events of the currently ongoing progress span in the active user session
-or transaction. See the help for start_progress_span() for more information.
-
-Arguments:
-
-* msg - message to report (str)
-* verbosity - ncs.VERBOSITY_*, VERBOSITY_NORMAL is default (optional)
-* attrs - user defined attributes (optional)
-* links - list of ncs.progress.Span or dict (optional)
-* path - keypath to an action/leaf/service/etc (str, optional)
-
-
-
-
-
-query_free_result(...)
-
-Method:
-
-```python
-query_free_result(self, qrs)
-```
-
-Deallocate QueryResult memory.
-
-Deallocates memory inside the QueryResult object 'qrs' returned from
-query_result(). It is not necessary to call this method as deallocation
-will be done when the Python library garbage collects the QueryResult
-object.
-
-Arguments:
-
-* qrs -- the query result structure to free
-
-
-
-
-
-report_progress(...)
-
-Method:
-
-```python
-report_progress(self, th, verbosity, msg, package=None)
-```
-
-Report transaction/action progress.
-
-The 'package' argument is only available to NCS.
-
-This function is deprecated and will be removed in a future release.
-Use progress_info() instead.
-
-
-
-
-
-report_progress_start(...)
-
-Method:
-
-```python
-report_progress_start(self, th, verbosity, msg, package=None)
-```
-
-Report transaction/action progress.
-
-Used for calculation of the duration between two events. The method
-returns a _Progress object to be passed to report_progress_stop()
-once the event has finished.
-
-The 'package' argument is only available to NCS.
-
-This function is deprecated and will be removed in a future release.
-Use start_progress_span() instead.
-
-
-
-
-
-report_progress_stop(...)
-
-Method:
-
-```python
-report_progress_stop(self, th, progress, annotation=None)
-```
-
-Report transaction/action progress.
-
-Used for calculation of the duration between two events. The method
-takes a _Progress object returned from report_progress_start().
-
-This function is deprecated and will be removed in a future release.
-Use end_progress_span() instead.
-
-
-
-
-
-report_service_progress(...)
-
-Method:
-
-```python
-report_service_progress(self, th, verbosity, msg, path, package=None)
-```
-
-Report transaction progress for a FASTMAP service.
-
-This function is deprecated and will be removed in a future release.
-Use progress_info() instead.
-
-
-
-
-
-report_service_progress_start(...)
-
-Method:
-
-```python
-report_service_progress_start(self, th, verbosity, msg, path, package=None)
-```
-
-Report transaction progress for a FASTMAP service.
-
-Used for calculation of the duration between two events. The method
-returns a _Progress object to be passed to
-report_service_progress_stop() once the event has finished.
-
-This function is deprecated and will be removed in a future release.
-Use start_progress_span() instead.
-
-
-
-
-
-report_service_progress_stop(...)
-
-Method:
-
-```python
-report_service_progress_stop(self, th, progress, annotation=None)
-```
-
-Report transaction progress for a FASTMAP service.
-
-Used for calculation of the duration between two events. The method
-takes a _Progress object returned from report_service_progress_start().
-
-This function is deprecated and will be removed in a future release.
-Use end_progress_span() instead.
-
-
-
-
-
-run_with_retry(...)
-
-Method:
-
-```python
-run_with_retry(self, fun, max_num_retries=10, commit_params=None, usid=0, flags=0, vendor=None, product=None, version=None, client_id=None)
-```
-
-Run fun with a new read-write transaction against RUNNING.
-
-The transaction is applied if 'fun' returns True. 'fun' is retried
-only in case of transaction conflicts; each retry is run using a new
-transaction.
-
-The last conflict error.Error is raised if the maximum number of
-retries is reached.
-
-Arguments:
-
-* fun - work fun (fun(maapi.Transaction) -> bool)
-* usid - user id (int)
-* max_num_retries - maximum number of retries (int)
-
-Returns:
-
-* bool -- True if the transaction was applied, else False.
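-
-Example use (a sketch; the path set in the worker is hypothetical):
-
-    def worker(t):
-        t.maapi.set_elem(t.th, 'new-value', '/some/config/leaf')
-        return True  # True means: apply the transaction
-
-    m.run_with_retry(worker)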
-
-
-
-
-
-safe_create(...)
-
-Method:
-
-```python
-safe_create(self, th, path)
-```
-
-Safe version of create.
-
-Create a new list entry, a presence container, or a leaf of
-type empty in the data tree - if it doesn't already exist.
-
-Arguments:
-
-* th -- transaction handle
-* path -- path to the new element (str)
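-
-Example use (a sketch; the path is hypothetical):
-
-    m.safe_create(th, '/ncs:devices/device-group{edge}')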
-
-
-
-
-
-safe_delete(...)
-
-Method:
-
-```python
-safe_delete(self, th, path)
-```
-
-Safe version of delete.
-
-Delete an existing list entry, a presence container, or an
-optional leaf and all its children (if any) from the data
-tree, if it exists.
-
-Arguments:
-
-* th -- transaction handle
-* path -- path to the element (str)
-
-
-
-
-
-safe_get_elem(...)
-
-Method:
-
-```python
-safe_get_elem(self, th, path)
-```
-
-Safe version of get_elem.
-
-Read the element at 'path', returns 'None' if it doesn't
-exist.
-
-Arguments:
-
-* th -- transaction handle
-* path -- path to the element (str)
-
-Returns:
-
-* configuration element
-
-
-
-
-
-safe_get_object(...)
-
-Method:
-
-```python
-safe_get_object(self, th, n, path)
-```
-
-Safe version of get_object.
-
-This function reads at most 'n' values from the list entry or
-container specified by 'path'. Returns 'None' if the entry does
-not exist.
-
-Arguments:
-
-* th -- transaction handle
-* n -- at most n values (int)
-* path -- path to the object (str)
-
-Returns:
-
-* configuration object
-
-
-
-
-
-set_elem(...)
-
-Method:
-
-```python
-set_elem(self, th, value, path)
-```
-
-Set the node at 'path' to 'value'.
-
-If 'value' is not of type Value it will be converted to a string
-before calling set_elem2() under the hood.
-
-Arguments:
-
-* th -- transaction handle
-* value -- element value (Value or str)
-* path -- path to the element (str)
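-
-Example use (a sketch; the path is hypothetical):
-
-    m.set_elem(th, 'new description', '/some/config/description')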
-
-
-
-
-
-shared_apply_template(...)
-
-Method:
-
-```python
-shared_apply_template(self, th, name, path, vars=None, flags=0)
-```
-
-FASTMAP version of apply_template().
-
-
-
-
-
-shared_copy_tree(...)
-
-Method:
-
-```python
-shared_copy_tree(self, th, from_path, to_path, flags=0)
-```
-
-FASTMAP version of copy_tree().
-
-
-
-
-
-shared_create(...)
-
-Method:
-
-```python
-shared_create(self, th, path, flags=0)
-```
-
-FASTMAP version of create().
-
-
-
-
-
-shared_insert(...)
-
-Method:
-
-```python
-shared_insert(self, th, path, flags=0)
-```
-
-FASTMAP version of insert().
-
-
-
-
-
-shared_set_elem(...)
-
-Method:
-
-```python
-shared_set_elem(self, th, value, path, flags=0)
-```
-
-FASTMAP version of set_elem().
-
-If 'value' is not of type Value it will be converted to a string
-before calling shared_set_elem2() under the hood.
-
-
-
-
-
-shared_set_values(...)
-
-Method:
-
-```python
-shared_set_values(self, th, values, path, flags=0)
-```
-
-FASTMAP version of set_values().
-
-
-
-
-
-start_progress_span(...)
-
-Method:
-
-```python
-start_progress_span(self, msg, verbosity=0, attrs=None, links=None, path=None)
-```
-
-Starts a progress span. Progress spans are trace messages written to
-the progress trace and the developer log. A progress span consists of a
-start and a stop event which can be used to calculate the duration
-between the two. Those events can be identified by unique span-ids.
-Inside a span it is possible to start new spans, which then become
-child spans; their parent-span-id is set to the enclosing span's
-span-id. A child span can be used to calculate the duration of a
-subtask, and is started by a subsequent start_progress_span() call
-and ended with end_progress_span().
-
-The concepts of traces, trace-id and spans are highly influenced by
-https://opentelemetry.io/docs/concepts/signals/traces/#spans
-
-
-Call help(ncs.progress) or help(confd.progress) for examples.
-
-Arguments:
-
-* msg - message to report (str)
-* verbosity - ncs.VERBOSITY_*, VERBOSITY_NORMAL is default (optional)
-* attrs - user defined attributes (optional)
-* links - list of ncs.progress.Span or dict (optional)
-* path - keypath to an action/leaf/service/etc (str, optional)
-
-Returns:
-
-* trace span (ncs.progress.Span)
-
-
-
-
-
-start_read_trans(...)
-
-Method:
-
-```python
-start_read_trans(self, db=2, usid=0, flags=0, vendor=None, product=None, version=None, client_id=None)
-```
-
-Start a read transaction.
-
-For details see start_trans().
-
-
-
-
-
-start_trans(...)
-
-Method:
-
-```python
-start_trans(self, rw, db=2, usid=0, flags=0, vendor=None, product=None, version=None, client_id=None)
-```
-
-Start a transaction towards the 'db'.
-
-This function starts a new transaction towards the given
-data store.
-
-Arguments:
-
-* rw -- Either READ or READ_WRITE flag (ncs)
-* db -- Either CANDIDATE, RUNNING or STARTUP flag (cdb)
-* usid -- user id (int)
-* flags -- additional transaction flags (int)
-* vendor -- lock error information (str, optional)
-* product -- lock error information (str, optional)
-* version -- lock error information (str, optional)
-* client_id -- lock error information (str, optional)
-
-Returns:
-
-* transaction (maapi.Transaction)
-
-Flags (maapi):
-
-* FLAG_HINT_BULK
-* FLAG_NO_DEFAULTS
-* FLAG_CONFIG_ONLY
-* FLAG_HIDE_INACTIVE
-* FLAG_DELAYED_WHEN
-* FLAG_NO_CONFIG_CACHE
-* FLAG_CONFIG_CACHE_ONLY
-* FLAG_HIDE_ALL_HIDEGROUPS
-* FLAG_SKIP_SUBSCRIBERS
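-
-Example use (a sketch; starts a read-write transaction towards
-RUNNING, writes a hypothetical leaf and applies):
-
-    t = m.start_trans(ncs.READ_WRITE, ncs.RUNNING)
-    try:
-        m.set_elem(t.th, '10', '/some/config/leaf')
-        t.apply()
-    finally:
-        t.finish()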
-
-
-
-
-
-start_trans_in_trans(...)
-
-Method:
-
-```python
-start_trans_in_trans(self, th, readwrite, usid=0)
-```
-
-Start a new transaction within a transaction.
-
-This function makes it possible to start a transaction with another
-transaction as backend, instead of an actual data store. This can be
-useful if we want to make a set of related changes, and then either
-apply or discard them all based on some criterion, while other changes
-remain unaffected. The thandle identifies the backend transaction to
-use. If 'usid' is 0, the transaction will be started within the user
-session associated with the MAAPI socket, otherwise it will be started
-within the user session given by usid. If we call apply() on this
-"transaction in a transaction" object, the changes (if any) will be
-applied to the backend transaction. To discard the changes, call
-finish() without calling apply() first.
-
-Arguments:
-
-* th -- transaction handle
-* readwrite -- Either READ or READ_WRITE flag (ncs)
-* usid -- user id (int)
-
-Returns:
-
-* transaction (maapi.Transaction)
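-
-Example use (a sketch, assuming an outer transaction 't'; the staged
-changes reach 't' only if apply() is called):
-
-    tt = m.start_trans_in_trans(t.th, ncs.READ_WRITE)
-    try:
-        m.set_elem(tt.th, 'staged-value', '/some/config/leaf')
-        tt.apply()   # merge the staged changes into 't'
-    finally:
-        tt.finish()  # without apply(), the changes are discarded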
-
-
-
-
-
-start_user_session(...)
-
-Method:
-
-```python
-start_user_session(self, user, context, groups=[], src_ip='127.0.0.1', src_port=0, proto=1, vendor=None, product=None, version=None, client_id=None, path=None)
-```
-
-Start a new user session.
-
-This method provides reasonable defaults.
-
-Arguments:
-
-* user - username (str)
-* context - context for the session (str)
-* groups - groups (list)
-* src_ip - source ip address (str)
-* src_port - source port (int)
-* proto - protocol used for connecting (e.g. ncs.PROTO_TCP)
-* vendor -- lock error information (str, optional)
-* product -- lock error information (str, optional)
-* version -- lock error information (str, optional)
-* client_id -- lock error information (str, optional)
-* path -- path to Unix-domain socket (only for NSO)
-
-Protocol flags (ncs):
-
-* PROTO_CONSOLE
-* PROTO_HTTP
-* PROTO_HTTPS
-* PROTO_SSH
-* PROTO_SSL
-* PROTO_SYSTEM
-* PROTO_TCP
-* PROTO_TLS
-* PROTO_TRACE
-* PROTO_UDP
-
-Example use:
-
- maapi.start_user_session(
- sock_maapi,
- 'admin',
- 'python',
- [],
- _ncs.ADDR,
- _ncs.PROTO_TCP)
-
-
-
-
-
-start_write_trans(...)
-
-Method:
-
-```python
-start_write_trans(self, db=2, usid=0, flags=0, vendor=None, product=None, version=None, client_id=None)
-```
-
-Start a write transaction.
-
-For details see start_trans().
-
-
-
-
-
-write_service_log_entry(...)
-
-Method:
-
-```python
-write_service_log_entry(self, path, msg, type, level)
-```
-
-Write service log entries.
-
-This function makes it possible to write service log entries from
-FASTMAP code.
-
-
-
-### _class_ **NoOverwriteScope**
-
-Enumeration for no-overwrite scopes:
-WRITE_SET_ONLY = 1
-WRITE_AND_FULL_READ_SET = 2
-WRITE_AND_SERVICE_READ_SET = 3
-
-```python
-NoOverwriteScope(*values)
-```
-
-Members:
-
-
-
-WRITE_AND_FULL_READ_SET
-
-```python
-WRITE_AND_FULL_READ_SET = 2
-```
-
-
-
-
-
-
-WRITE_AND_SERVICE_READ_SET
-
-```python
-WRITE_AND_SERVICE_READ_SET = 3
-```
-
-
-
-
-
-
-WRITE_SET_ONLY
-
-```python
-WRITE_SET_ONLY = 1
-```
-
-
-
-
-
-
-name
-
-The name of the Enum member.
-
-
-
-
-
-value
-
-The value of the Enum member.
-
-
-
-### _class_ **Session**
-
-Encapsulate a MAAPI user session.
-
-Context manager for user sessions. This class makes it easy to use
-a single Maapi connection and switch user session along the way.
-For example:
-
- with Maapi() as m:
- for user, context, device in devlist:
- with Session(m, user, context):
- with m.start_write_trans() as t:
- # ...
- # do something using the correct user session
- # ...
- t.apply()
-
-```python
-Session(maapi, user, context, groups=[], src_ip='127.0.0.1', src_port=0, proto=1, vendor=None, product=None, version=None, client_id=None, path=None)
-```
-
-Initialize a Session object via start_user_session().
-
-Arguments:
-
-* maapi -- maapi object (maapi.Maapi)
-* for all other arguments see start_user_session()
-
-Members:
-
-
-
-close(...)
-
-Method:
-
-```python
-close(self)
-```
-
-Close the user session.
-
-
-
-### _class_ **Transaction**
-
-Class that corresponds to a single MAAPI transaction.
-
-```python
-Transaction(maapi, th=None, rw=None, db=2, vendor=None, product=None, version=None, client_id=None)
-```
-
-Initialize a Transaction object.
-
-When created one may access the maapi and th arguments like this:
-
- trans = Transaction(mymaapi, th=myth)
- trans.maapi # the Maapi object
- trans.th # the transaction handle
-
-An instance of this class is also a context manager:
-
- with Transaction(mymaapi, th=myth) as trans:
- # do something here...
-
-When exiting the with statement, finish() will be called.
-
-If 'th' is left out (or None) a new transaction is started using
-the 'db' and 'rw' arguments, otherwise 'db' and 'rw' are ignored.
-
-Arguments:
-
-* maapi -- a Maapi object (maapi.Maapi)
-* th -- a transaction handle or None
-* rw -- Either READ or READ_WRITE flag (ncs)
-* db -- Either CANDIDATE, RUNNING or STARTUP flag (cdb)
-* vendor -- lock error information (optional)
-* product -- lock error information (optional)
-* version -- lock error information (optional)
-* client_id -- lock error information (optional)
-
-Members:
-
-
-
-abort(...)
-
-Method:
-
-```python
-abort(self)
-```
-
-Abort the transaction.
-
-
-
-
-
-apply(...)
-
-Method:
-
-```python
-apply(self, keep_open=True, flags=0)
-```
-
-Apply the transaction.
-
-Validates, prepares and eventually commits or aborts the
-transaction. If the validation fails and the 'keep_open'
-argument is set to True (default), the transaction is left
-open and the developer can react upon the validation errors.
-
-Arguments:
-
-* keep_open -- keep transaction open (boolean)
-* flags - additional transaction flags (int)
-
-Flags (maapi):
-
-* COMMIT_NCS_NO_REVISION_DROP
-* COMMIT_NCS_NO_DEPLOY
-* COMMIT_NCS_NO_NETWORKING
-* COMMIT_NCS_NO_OUT_OF_SYNC_CHECK
-* COMMIT_NCS_NO_OVERWRITE_WRITE_SET_ONLY
-* COMMIT_NCS_NO_OVERWRITE_WRITE_AND_FULL_READ_SET
-* COMMIT_NCS_NO_OVERWRITE_WRITE_AND_SERVICE_READ_SET
-* COMMIT_NCS_USE_LSA
-* COMMIT_NCS_NO_LSA
-* COMMIT_NCS_RECONCILE_KEEP_NON_SERVICE_CONFIG
-* COMMIT_NCS_RECONCILE_DISCARD_NON_SERVICE_CONFIG
-* COMMIT_NCS_RECONCILE_ATTACH_NON_SERVICE_CONFIG
-* COMMIT_NCS_RECONCILE_DETACH_NON_SERVICE_CONFIG
-* COMMIT_NCS_CONFIRM_NETWORK_STATE
-* COMMIT_NCS_CONFIRM_NETWORK_STATE_RE_EVALUATE_POLICIES
-
-
-
-
-
-apply_params(...)
-
-Method:
-
-```python
-apply_params(self, keep_open=True, params=None)
-```
-
-Apply the transaction and return the result in the form of a dict.
-
-Validates, prepares and eventually commits or aborts the
-transaction. If the validation fails and the 'keep_open'
-argument is set to True (default), the transaction is left
-open and the developer can react upon the validation errors.
-
-The 'params' argument represents commit parameters. See the
-CommitParams class for available commit parameters.
-
-The result is a dictionary representing the result of applying the
-transaction. If dry-run was requested, the resulting dictionary
-will have the 'dry-run' key set along with the actual results. If commit
-through the commit queue was requested, the resulting dictionary
-will have the 'commit-queue' key set. Otherwise the dictionary will
-be empty.
-
-Arguments:
-
-* keep_open -- keep transaction open (boolean)
-* params -- list of commit parameters (maapi.CommitParams)
-
-Returns:
-
-* dict (see above)
-
-Example use:
-
- with ncs.maapi.single_write_trans('admin', 'python') as t:
- root = ncs.maagic.get_root(t)
- dns_list = root.devices.device['ex1'].config.sys.dns.server
- dns_list.create('192.0.2.1')
- params = t.get_params()
- params.dry_run_native()
- result = t.apply_params(True, params)
- print(result['device']['ex1'])
- t.apply_params(True, t.get_params())
-
-
-
-
-
-commit(...)
-
-Method:
-
-```python
-commit(self)
-```
-
-Commit the transaction.
-
-
-
-
-
-end_progress_span(...)
-
-Method:
-
-```python
-end_progress_span(self, *args)
-```
-
-Don't call this function.
-
-Call instance.end() on the progress.Span instance created from
-start_progress_span() instead.
-
-
-
-
-
-finish(...)
-
-Method:
-
-```python
-finish(self)
-```
-
-Finish the transaction.
-
-This will finish the transaction. If the transaction is implemented
-by an external database, this will invoke the finish() callback.
-
-
-
-
-
-get_params(...)
-
-Method:
-
-```python
-get_params(self)
-```
-
-Get the current commit parameters for the transaction.
-
-The result is an instance of the CommitParams class.
-
-
-
-
-
-hide_group(...)
-
-Method:
-
-```python
-hide_group(self, group_name)
-```
-
-Hide a hide group.
-
-Hide all nodes belonging to a hide group in a transaction that started
-with flag FLAG_HIDE_ALL_HIDEGROUPS.
-
-
-
-
-
-prepare(...)
-
-Method:
-
-```python
-prepare(self, flags=0)
-```
-
-Prepare transaction.
-
-This function must be called as the first part of a two-phase commit.
-After this function has been called, commit() or abort() must be called.
-
-It will invoke the prepare callback in all participants in the
-transaction. If all participants reply with OK, the second phase of
-the two-phase commit procedure is commenced.
-
-Arguments:
-
-* flags - additional transaction flags (int)
-
-Flags (maapi):
-
-* COMMIT_NCS_NO_REVISION_DROP
-* COMMIT_NCS_NO_DEPLOY
-* COMMIT_NCS_NO_NETWORKING
-* COMMIT_NCS_NO_OUT_OF_SYNC_CHECK
-* COMMIT_NCS_NO_OVERWRITE_WRITE_SET_ONLY
-* COMMIT_NCS_NO_OVERWRITE_WRITE_AND_FULL_READ_SET
-* COMMIT_NCS_NO_OVERWRITE_WRITE_AND_SERVICE_READ_SET
-* COMMIT_NCS_USE_LSA
-* COMMIT_NCS_NO_LSA
-* COMMIT_NCS_RECONCILE_KEEP_NON_SERVICE_CONFIG
-* COMMIT_NCS_RECONCILE_DISCARD_NON_SERVICE_CONFIG
-* COMMIT_NCS_RECONCILE_ATTACH_NON_SERVICE_CONFIG
-* COMMIT_NCS_RECONCILE_DETACH_NON_SERVICE_CONFIG
-* COMMIT_NCS_CONFIRM_NETWORK_STATE
-* COMMIT_NCS_CONFIRM_NETWORK_STATE_RE_EVALUATE_POLICIES
-
-
-
-
-
-progress_info(...)
-
-Method:
-
-```python
-progress_info(self, msg, verbosity=0, attrs=None, links=None, path=None)
-```
-
-While spans represent a pair of data points, start and stop, info
-events are singular events: one point in time. Call
-progress_info() to write a progress span info event to the progress
-trace. The info event will have the same span-id as the start and stop
-events of the currently ongoing progress span in the active user session
-or transaction. See the help for start_progress_span() for more information.
-
-Arguments:
-
-* msg - message to report (str)
-* verbosity - ncs.VERBOSITY_*, VERBOSITY_NORMAL is default (optional)
-* attrs - user defined attributes (optional)
-* links - list of ncs.progress.Span or dict (optional)
-* path - keypath to an action/leaf/service/etc (str, optional)
-
-
-
-
-
-start_progress_span(...)
-
-Method:
-
-```python
-start_progress_span(self, msg, verbosity=0, attrs=None, links=None, path=None)
-```
-
-Starts a progress span. Progress spans are trace messages written to
-the progress trace and the developer log. A progress span consists of a
-start and a stop event which can be used to calculate the duration
-between the two. Those events can be identified by unique span-ids.
-Inside a span it is possible to start new spans, which then become
-child spans; their parent-span-id is set to the enclosing span's
-span-id. A child span can be used to calculate the duration of a
-subtask, and is started by a subsequent start_progress_span() call
-and ended with end_progress_span().
-
-The function returns a Span object which either stops the span by
-invoking span.end() or by exiting a 'with' context. Messages are
-written to the progress trace which can be directed to a file, oper
-data or as notifications.
-
-Call help(ncs.progress) or help(confd.progress) for examples.
-
-Arguments:
-
-* msg - message to report (str)
-* verbosity - ncs.VERBOSITY_*, VERBOSITY_NORMAL is default (optional)
-* attrs - user defined attributes (optional)
-* links - list of ncs.progress.Span or dict (optional)
-* path - keypath to an action/leaf/service/etc (str, optional)
-
-Returns:
-
-* trace span (ncs.progress.Span)
-
-
-
-
-
-unhide_group(...)
-
-Method:
-
-```python
-unhide_group(self, group_name)
-```
-
-Unhide a hide group.
-
-Unhide all nodes belonging to a hide group in a transaction that started
-with flag FLAG_HIDE_ALL_HIDEGROUPS.
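-
-Example use (a sketch; 'debug' is a hypothetical hide group):
-
-    with m.start_read_trans(flags=ncs.maapi.FLAG_HIDE_ALL_HIDEGROUPS) as t:
-        t.unhide_group('debug')
-        # nodes in the 'debug' hide group are now visible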
-
-
-
-
-
-validate(...)
-
-Method:
-
-```python
-validate(self, unlock, forcevalidation=False)
-```
-
-Validate the transaction.
-
-This function validates all data written in the transaction. This
-includes all data model constraints and all defined semantic
-validation, i.e. user programs that have registered functions under
-validation points.
-
-If 'unlock' is True, the transaction is open for further editing even
-if validation succeeds. If 'unlock' is False and the function succeeds,
-the next function to be called MUST be prepare() or finish().
-
-'unlock = True' can be used to implement a 'validate' command which
-can be given in the middle of an editing session. The first thing that
-happens is that a lock is set. If 'unlock' == False, the lock is
-released on success. The lock is always released on failure.
-
-The 'forcevalidation' argument should normally be False. It has no
-effect for a transaction towards the running or startup data stores,
-validation is always performed. For a transaction towards the
-candidate data store, validation will not be done unless
-'forcevalidation' is True. Avoiding this validation is preferable if
-we are going to commit the candidate to running, since otherwise the
-validation will be done twice. However if we are implementing a
-'validate' command, we should give a True value for 'forcevalidation'.
-
-Arguments:
-
-* unlock (boolean)
-* forcevalidation (boolean)
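-
-Example use (a sketch; validates mid-edit while keeping the
-transaction open for further changes):
-
-    t.validate(True)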
-
-
-
-## Predefined Values
-
-```python
-
-CMD_KEEP_PIPE = 8
-CMD_NO_AAA = 4
-CMD_NO_FULLPATH = 1
-CMD_NO_HIDDEN = 2
-COMMIT_NCS_ASYNC_COMMIT_QUEUE = 256
-COMMIT_NCS_BYPASS_COMMIT_QUEUE = 64
-COMMIT_NCS_CONFIRM_NETWORK_STATE = 268435456
-COMMIT_NCS_CONFIRM_NETWORK_STATE_RE_EVALUATE_POLICIES = 536870912
-COMMIT_NCS_NO_DEPLOY = 8
-COMMIT_NCS_NO_FASTMAP = 8
-COMMIT_NCS_NO_LSA = 1048576
-COMMIT_NCS_NO_NETWORKING = 16
-COMMIT_NCS_NO_OUT_OF_SYNC_CHECK = 32
-COMMIT_NCS_NO_OVERWRITE = 1024
-COMMIT_NCS_NO_REVISION_DROP = 4
-COMMIT_NCS_RECONCILE_ATTACH_NON_SERVICE_CONFIG = 67108864
-COMMIT_NCS_RECONCILE_DETACH_NON_SERVICE_CONFIG = 134217728
-COMMIT_NCS_RECONCILE_DISCARD_NON_SERVICE_CONFIG = 33554432
-COMMIT_NCS_RECONCILE_KEEP_NON_SERVICE_CONFIG = 16777216
-COMMIT_NCS_SYNC_COMMIT_QUEUE = 512
-COMMIT_NCS_USE_LSA = 524288
-CONFIG_AUTOCOMMIT = 8192
-CONFIG_C = 4
-CONFIG_CDB_ONLY = 4194304
-CONFIG_CONTINUE_ON_ERROR = 16384
-CONFIG_C_IOS = 32
-CONFIG_HIDE_ALL = 2048
-CONFIG_J = 2
-CONFIG_JSON = 131072
-CONFIG_MERGE = 64
-CONFIG_NO_BACKQUOTE = 2097152
-CONFIG_NO_PARENTS = 524288
-CONFIG_OPER_ONLY = 1048576
-CONFIG_READ_WRITE_ACCESS_ONLY = 33554432
-CONFIG_REPLACE = 1024
-CONFIG_SHOW_DEFAULTS = 16
-CONFIG_SUPPRESS_ERRORS = 32768
-CONFIG_TURBO_C = 8388608
-CONFIG_UNHIDE_ALL = 4096
-CONFIG_WITH_DEFAULTS = 8
-CONFIG_WITH_OPER = 128
-CONFIG_WITH_SERVICE_META = 262144
-CONFIG_XML = 1
-CONFIG_XML_LOAD_LAX = 65536
-CONFIG_XML_PRETTY = 512
-CONFIG_XPATH = 256
-DEL_ALL = 2
-DEL_EXPORTED = 3
-DEL_SAFE = 1
-ECHO = 1
-FLAG_CONFIG_CACHE_ONLY = 32
-FLAG_CONFIG_ONLY = 4
-FLAG_DELAYED_WHEN = 64
-FLAG_DELETE = 2
-FLAG_EMIT_PARENTS = 1
-FLAG_HIDE_ALL_HIDEGROUPS = 256
-FLAG_HIDE_INACTIVE = 8
-FLAG_HINT_BULK = 1
-FLAG_NON_RECURSIVE = 4
-FLAG_NO_CONFIG_CACHE = 16
-FLAG_NO_DEFAULTS = 2
-FLAG_SKIP_SUBSCRIBERS = 512
-LOAD_SCHEMAS_LOAD = True
-LOAD_SCHEMAS_RELOAD = 2
-LOAD_SCHEMAS_SKIP = False
-MOVE_AFTER = 3
-MOVE_BEFORE = 2
-MOVE_FIRST = 1
-MOVE_LAST = 4
-NOECHO = 0
-PRODUCT = 'NCS'
-UPGRADE_KILL_ON_TIMEOUT = 1
-```
diff --git a/developer-reference/pyapi/ncs.md b/developer-reference/pyapi/ncs.md
deleted file mode 100644
index 81e4b1b5..00000000
--- a/developer-reference/pyapi/ncs.md
+++ /dev/null
@@ -1,367 +0,0 @@
-# Python ncs Module
-
-NCS Python high level module.
-
-The high-level APIs provided by this module are an abstraction on top of the
-low-level APIs. This makes them easier to use, improves code readability and
-development rate for common use cases, such as service and action callbacks.
-
-As an example, the maagic module greatly simplifies the way of accessing data.
-First it helps in navigating the data model, using standard Python object dot
-notation, giving very clear and readable code. The context handlers remove the
-need to close sockets, user sessions and transactions. Finally, by removing the
-need to know the data types of the leafs, allows you to focus on the program
-logic.
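-
-A minimal sketch of the style this enables (assuming a running NSO
-with an existing device entry 'ex0'):
-
-    import ncs
-
-    with ncs.maapi.single_read_trans('admin', 'python') as t:
-        root = ncs.maagic.get_root(t)
-        print(root.devices.device['ex0'].address)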
-
-This top module imports the following modules:
-
-* alarm -- NSO alarm handling
-* application -- module for implementing packages and services
-* cdb -- placeholder for low-level _ncs.cdb items
-* dp -- data provider, actions
-* error -- placeholder for low-level _ncs.error items
-* events -- placeholder for low-level _ncs.events items
-* ha -- placeholder for low-level _ncs.ha items
-* log -- logging utilities
-* maagic -- data access module
-* maapi -- MAAPI interface
-* template -- module for working with templates
-* service_log -- module for doing service logging
-* upgrade -- module for writing upgrade components
-* util -- misc utilities
-
-## Submodules
-
-- [ncs.alarm](ncs.alarm.md): NCS Alarm Manager module.
-- [ncs.application](ncs.application.md): Module for building NCS applications.
-- [ncs.cdb](ncs.cdb.md): CDB high level module.
-- [ncs.dp](ncs.dp.md): Callback module for connecting data providers to ConfD/NCS.
-- [ncs.experimental](ncs.experimental.md): Experimental stuff.
-- [ncs.log](ncs.log.md): This module provides some logging utilities.
-- [ncs.maagic](ncs.maagic.md): Confd/NCS data access module.
-- [ncs.maapi](ncs.maapi.md): MAAPI high level module.
-- [ncs.progress](ncs.progress.md): MAAPI progress trace high level module.
-- [ncs.service_log](ncs.service_log.md): This module provides service logging
-- [ncs.template](ncs.template.md): This module implements classes to simplify template processing.
-- [ncs.util](ncs.util.md): Utility module, low-level abstractions
-
-## Predefined Values
-
-```python
-
-ACCUMULATE = 1
-ADDR = '127.0.0.1'
-ALREADY_LOCKED = -4
-ATTR_ANNOTATION = 2147483649
-ATTR_BACKPOINTER = 2147483651
-ATTR_INACTIVE = 0
-ATTR_ORIGIN = 2147483655
-ATTR_ORIGINAL_VALUE = 2147483653
-ATTR_OUT_OF_BAND = 2147483664
-ATTR_REFCOUNT = 2147483650
-ATTR_TAGS = 2147483648
-ATTR_WHEN = 2147483652
-CANDIDATE = 1
-CMP_EQ = 1
-CMP_GT = 3
-CMP_GTE = 4
-CMP_LT = 5
-CMP_LTE = 6
-CMP_NEQ = 2
-CMP_NOP = 0
-CONFD_EOF = -2
-CONFD_ERR = -1
-CONFD_OK = 0
-CONFD_PORT = 4565
-CS_NODE_CMP_NORMAL = 0
-CS_NODE_CMP_SNMP = 1
-CS_NODE_CMP_SNMP_IMPLIED = 2
-CS_NODE_CMP_UNSORTED = 4
-CS_NODE_CMP_USER = 3
-CS_NODE_HAS_DISPLAY_WHEN = 1024
-CS_NODE_HAS_META_DATA = 2048
-CS_NODE_HAS_MOUNT_POINT = 32768
-CS_NODE_HAS_WHEN = 512
-CS_NODE_IS_ACTION = 8
-CS_NODE_IS_CASE = 128
-CS_NODE_IS_CDB = 4
-CS_NODE_IS_CONTAINER = 256
-CS_NODE_IS_DYN = 1
-CS_NODE_IS_LEAFREF = 16384
-CS_NODE_IS_LEAF_LIST = 8192
-CS_NODE_IS_LIST = 1
-CS_NODE_IS_NOTIF = 64
-CS_NODE_IS_PARAM = 16
-CS_NODE_IS_RESULT = 32
-CS_NODE_IS_STRING_AS_BINARY = 65536
-CS_NODE_IS_WRITE = 2
-CS_NODE_IS_WRITE_ALL = 4096
-C_BINARY = 39
-C_BIT32 = 29
-C_BIT64 = 30
-C_BITBIG = 50
-C_BOOL = 17
-C_BUF = 5
-C_CDBBEGIN = 37
-C_DATE = 20
-C_DATETIME = 19
-C_DECIMAL64 = 43
-C_DEFAULT = 42
-C_DOUBLE = 14
-C_DQUAD = 46
-C_DURATION = 27
-C_EMPTY = 53
-C_ENUM_HASH = 28
-C_ENUM_VALUE = 28
-C_HEXSTR = 47
-C_IDENTITYREF = 44
-C_INT16 = 7
-C_INT32 = 8
-C_INT64 = 9
-C_INT8 = 6
-C_IPV4 = 15
-C_IPV4PREFIX = 40
-C_IPV4_AND_PLEN = 48
-C_IPV6 = 16
-C_IPV6PREFIX = 41
-C_IPV6_AND_PLEN = 49
-C_LIST = 31
-C_NOEXISTS = 1
-C_OBJECTREF = 34
-C_OID = 38
-C_PTR = 36
-C_QNAME = 18
-C_STR = 4
-C_SYMBOL = 3
-C_TIME = 23
-C_UINT16 = 11
-C_UINT32 = 12
-C_UINT64 = 13
-C_UINT8 = 10
-C_UNION = 35
-C_XMLBEGIN = 32
-C_XMLBEGINDEL = 45
-C_XMLEND = 33
-C_XMLMOVEAFTER = 52
-C_XMLMOVEFIRST = 51
-C_XMLTAG = 2
-DB_INVALID = 0
-DB_VALID = 1
-DEBUG = 1
-DELAYED_RESPONSE = 2
-EOF = -2
-ERR = -1
-ERRCODE_ACCESS_DENIED = 3
-ERRCODE_APPLICATION = 4
-ERRCODE_APPLICATION_INTERNAL = 5
-ERRCODE_DATA_MISSING = 8
-ERRCODE_INCONSISTENT_VALUE = 2
-ERRCODE_INTERNAL = 7
-ERRCODE_INTERRUPT = 9
-ERRCODE_IN_USE = 0
-ERRCODE_PROTO_USAGE = 6
-ERRCODE_RESOURCE_DENIED = 1
-ERRINFO_KEYPATH = 0
-ERRINFO_STRING = 1
-ERR_ABORTED = 49
-ERR_ACCESS_DENIED = 3
-ERR_ALREADY_EXISTS = 2
-ERR_APPLICATION_INTERNAL = 39
-ERR_BADPATH = 8
-ERR_BADSTATE = 17
-ERR_BADTYPE = 5
-ERR_BAD_CONFIG = 36
-ERR_BAD_KEYREF = 14
-ERR_CLI_CMD = 59
-ERR_DATA_MISSING = 58
-ERR_EOF = 45
-ERR_EXTERNAL = 19
-ERR_HA_ABORT = 71
-ERR_HA_BADCONFIG = 69
-ERR_HA_BADFXS = 27
-ERR_HA_BADNAME = 29
-ERR_HA_BADTOKEN = 28
-ERR_HA_BADVSN = 52
-ERR_HA_BIND = 30
-ERR_HA_CLOSED = 26
-ERR_HA_CONNECT = 25
-ERR_HA_NOTICK = 31
-ERR_HA_WITH_UPGRADE = 47
-ERR_INCONSISTENT_VALUE = 38
-ERR_INTERNAL = 18
-ERR_INUSE = 11
-ERR_INVALID_INSTANCE = 43
-ERR_LIB_NOT_INITIALIZED = 34
-ERR_LOCKED = 10
-ERR_MALLOC = 20
-ERR_MISSING_INSTANCE = 42
-ERR_MUST_FAILED = 41
-ERR_NOEXISTS = 1
-ERR_NON_UNIQUE = 13
-ERR_NOSESSION = 22
-ERR_NOSTACK = 9
-ERR_NOTCREATABLE = 6
-ERR_NOTDELETABLE = 7
-ERR_NOTMOVABLE = 46
-ERR_NOTRANS = 61
-ERR_NOTSET = 12
-ERR_NOT_IMPLEMENTED = 51
-ERR_NOT_WRITABLE = 4
-ERR_NO_MOUNT_ID = 67
-ERR_OS = 24
-ERR_POLICY_COMPILATION_FAILED = 54
-ERR_POLICY_EVALUATION_FAILED = 55
-ERR_POLICY_FAILED = 53
-ERR_PROTOUSAGE = 21
-ERR_RESOURCE_DENIED = 37
-ERR_STALE_INSTANCE = 68
-ERR_START_FAILED = 57
-ERR_SUBAGENT_DOWN = 33
-ERR_TIMEOUT = 48
-ERR_TOOMANYTRANS = 23
-ERR_TOO_FEW_ELEMS = 15
-ERR_TOO_MANY_ELEMS = 16
-ERR_TOO_MANY_SESSIONS = 35
-ERR_TRANSACTION_CONFLICT = 70
-ERR_UNAVAILABLE = 44
-ERR_UNSET_CHOICE = 40
-ERR_UPGRADE_IN_PROGRESS = 60
-ERR_VALIDATION_WARNING = 32
-ERR_XPATH = 50
-EXEC_COMPARE = 13
-EXEC_CONTAINS = 11
-EXEC_DERIVED_FROM = 9
-EXEC_DERIVED_FROM_OR_SELF = 10
-EXEC_RE_MATCH = 8
-EXEC_STARTS_WITH = 7
-EXEC_STRING_COMPARE = 12
-FALSE = 0
-FIND_NEXT = 0
-FIND_SAME_OR_NEXT = 1
-HKP_MATCH_FULL = 3
-HKP_MATCH_HKP = 2
-HKP_MATCH_NONE = 0
-HKP_MATCH_TAGS = 1
-INTENDED = 7
-IN_USE = -5
-ITER_CONTINUE = 3
-ITER_RECURSE = 2
-ITER_STOP = 1
-ITER_SUSPEND = 4
-ITER_UP = 5
-ITER_WANT_ANCESTOR_DELETE = 2
-ITER_WANT_ATTR = 4
-ITER_WANT_CLI_ORDER = 1024
-ITER_WANT_CLI_STR = 8
-ITER_WANT_LEAF_FIRST_ORDER = 32
-ITER_WANT_LEAF_LAST_ORDER = 64
-ITER_WANT_PREV = 1
-ITER_WANT_P_CONTAINER = 256
-ITER_WANT_REVERSE = 128
-ITER_WANT_SCHEMA_ORDER = 16
-ITER_WANT_SUPPRESS_OPER_DEFAULTS = 2048
-LF_AND = 1
-LF_CMP = 3
-LF_CMP_LL = 7
-LF_EXEC = 5
-LF_EXISTS = 4
-LF_NOT = 2
-LF_OR = 0
-LF_ORIGIN = 6
-LIB_API_VSN = 134610944
-LIB_API_VSN_STR = '08060000'
-LIB_PROTO_VSN = 86
-LIB_PROTO_VSN_STR = '86'
-LIB_VSN = 134610944
-LIB_VSN_STR = '08060000'
-LISTENER_CLI = 8
-LISTENER_IPC = 1
-LISTENER_NETCONF = 2
-LISTENER_SNMP = 4
-LISTENER_WEBUI = 16
-LOAD_SCHEMA_HASH = 65536
-LOAD_SCHEMA_NODES = 1
-LOAD_SCHEMA_TYPES = 2
-MMAP_SCHEMAS_FIXED_ADDR = 2
-MMAP_SCHEMAS_KEEP_SIZE = 1
-MOP_ATTR_SET = 6
-MOP_CREATED = 1
-MOP_DELETED = 2
-MOP_MODIFIED = 3
-MOP_MOVED_AFTER = 5
-MOP_VALUE_SET = 4
-NCS_ERR_CONNECTION_CLOSED = 64
-NCS_ERR_CONNECTION_REFUSED = 56
-NCS_ERR_CONNECTION_TIMEOUT = 63
-NCS_ERR_DEVICE = 65
-NCS_ERR_SERVICE_CONFLICT = 62
-NCS_ERR_TEMPLATE = 66
-NCS_LISTENER_NETCONF_CALL_HOME = 32
-NCS_PORT = 4569
-NO_DB = 0
-OK = 0
-OPERATIONAL = 4
-PATH = None
-PORT = 4569
-PRE_COMMIT_RUNNING = 6
-PROGRESS_INFO = 3
-PROGRESS_START = 1
-PROGRESS_STOP = 2
-PROTO_CONSOLE = 4
-PROTO_HTTP = 6
-PROTO_HTTPS = 7
-PROTO_SSH = 2
-PROTO_SSL = 5
-PROTO_SYSTEM = 3
-PROTO_TCP = 1
-PROTO_TLS = 9
-PROTO_TRACE = 3
-PROTO_UDP = 8
-PROTO_UNKNOWN = 0
-QUERY_HKEYPATH = 1
-QUERY_HKEYPATH_VALUE = 2
-QUERY_STRING = 0
-QUERY_TAG_VALUE = 3
-READ = 1
-READ_WRITE = 2
-RUNNING = 2
-SERIAL_HKEYPATH = 2
-SERIAL_NONE = 0
-SERIAL_TAG_VALUE = 3
-SERIAL_VALUE_T = 1
-SILENT = 0
-SNMP_COL_ROW = 3
-SNMP_Counter32 = 6
-SNMP_Counter64 = 9
-SNMP_INTEGER = 1
-SNMP_Interger32 = 2
-SNMP_IpAddress = 5
-SNMP_NULL = 0
-SNMP_OBJECT_IDENTIFIER = 4
-SNMP_OCTET_STRING = 3
-SNMP_OID = 2
-SNMP_Opaque = 8
-SNMP_TimeTicks = 7
-SNMP_Unsigned32 = 10
-SNMP_VARIABLE = 1
-STARTUP = 3
-TIMEZONE_UNDEF = -111
-TRACE = 2
-TRANSACTION = 5
-TRANS_CB_FLAG_FILTERED = 1
-TRUE = 1
-USESS_FLAG_FORWARD = 1
-USESS_FLAG_HAS_IDENTIFICATION = 2
-USESS_FLAG_HAS_OPAQUE = 4
-USESS_LOCK_MODE_EXCLUSIVE = 2
-USESS_LOCK_MODE_NONE = 0
-USESS_LOCK_MODE_PRIVATE = 1
-USESS_LOCK_MODE_SHARED = 3
-VALIDATION_FLAG_COMMIT = 2
-VALIDATION_FLAG_TEST = 1
-VALIDATION_WARN = -3
-VERBOSITY_DEBUG = 3
-VERBOSITY_NORMAL = 0
-VERBOSITY_VERBOSE = 1
-VERBOSITY_VERY_VERBOSE = 2
-```
diff --git a/developer-reference/pyapi/ncs.progress.md b/developer-reference/pyapi/ncs.progress.md
deleted file mode 100644
index 7303bec4..00000000
--- a/developer-reference/pyapi/ncs.progress.md
+++ /dev/null
@@ -1,134 +0,0 @@
-# Python ncs.progress Module
-
-MAAPI progress trace high level module.
-
-This module defines a high level interface to the low-level maapi functions.
-
-In the Progress Trace a span is used to measure the duration of an
-event, i.e. the 'start' and 'stop' messages in the progress trace log:
-
-start,2023-08-28T10:42:51.249865,,,,45,306,running,cli,,"foobar"...
-...
-stop,2023-08-28T10:42:51.284359,0.034494,,,45,306,running,cli,,"foobar"...
-
-maapi.Transaction.start_progress_span() and
-maapi.Maapi.start_progress_span() return progress.Span objects, which
-contains the span_id and trace_id (if enabled) attributes. Once the object
-is deleted/exited or manually obj.end() is called the stop message is
-written to the progress trace.
-
-Inside a span multiple sub spans can be created, sp2 in the below example.
-
- import ncs
-
- m = ncs.maapi.Maapi()
-    m.start_user_session('admin', 'my context')
- t = m.start_read_trans()
- sp1 = t.start_progress_span('first span')
- t.progress_info('info message')
- sp2 = t.start_progress_span('second span')
- sp2.end()
- sp1.end()
-
-Another way is to use context managers, which will handle all cleanup
-related to transactions, user sessions and socket connections:
-
- with ncs.maapi.Maapi() as m:
-        m.start_user_session('admin', 'my context')
- with m.start_read_trans() as t:
- with t.start_progress_span('first span'):
- t.progress_info('info message')
- with t.start_progress_span('second span'):
- pass
-
-Finally, a really compact way of doing this:
-
- with ncs.maapi.single_read_trans('admin', 'my context') as t:
- with t.start_progress_span('first span'):
- t.progress_info('info message')
-            with t.start_progress_span('second span'):
- pass
-
-There are multiple optional fields.
-
- with ncs.maapi.single_read_trans('admin', 'my context') as t:
- with t.start_progress_span('calling foo',
- attrs={'sys':'Linux', 'hostname':'bob'}):
- foo()
-
- with ncs.maapi.Maapi() as m:
- m.start_user_session('admin', 'my context')
- action = '/devices/device{ex0}/sync-from'
- with m.start_progress_span('copy running from ex0', path=action):
- m.request_action([], 0, action)
-
- # trace_id1 from an already existing trace
- trace_id1 = 'b1ce20b4-0ca4-4a3e-a448-8df860e622e0'
- with ncs.maapi.single_read_trans('admin', 'my context') as t:
- with t.start_progress_span('perform op related to old trace',
-                links=[{'trace_id': trace_id1}]):
- pass
-
-## Functions
-
-### conv_links
-
-```python
-conv_links(links)
-```
-
-Convert from [Span() | dict()] -> [dict()].
-
-
-## Classes
-
-### _class_ **EmptySpan**
-
-
-```python
-EmptySpan(span_id=None, trace_id=None)
-```
-
-Members:
-
-
-
-end(...)
-
-Method:
-
-```python
-end(self, *args)
-```
-
-Not implemented; no span to end.
-
-
-
-### _class_ **Span**
-
-
-```python
-Span(msock, span_id, trace_id=None)
-```
-
-Members:
-
-
-
-end(...)
-
-Method:
-
-```python
-end(self, annotation=None)
-```
-
-Ends a span, i.e. writes the stop event to the progress trace. This
-function is called automatically when the span is deleted, e.g. when
-exiting a 'with' context.
-
-* annotation -- sets the annotation field for stop events (str)
-
-
-
diff --git a/developer-reference/pyapi/ncs.service_log.md b/developer-reference/pyapi/ncs.service_log.md
deleted file mode 100644
index 1b23a610..00000000
--- a/developer-reference/pyapi/ncs.service_log.md
+++ /dev/null
@@ -1,88 +0,0 @@
-# Python ncs.service_log Module
-
-This module provides service logging
-
-## Classes
-
-### _class_ **ServiceLog**
-
-This class contains methods to write service log entries.
-
-```python
-ServiceLog(node_or_maapi)
-```
-
-Initialize a service log object.
-
-Members:
-
-
-
-debug(...)
-
-Method:
-
-```python
-debug(self, path, msg, type)
-```
-
-Log a debug message.
-
-
-
-
-
-error(...)
-
-Method:
-
-```python
-error(self, path, msg, type)
-```
-
-Log an error message.
-
-
-
-
-
-info(...)
-
-Method:
-
-```python
-info(self, path, msg, type)
-```
-
-Log an information message.
-
-
-
-
-
-trace(...)
-
-Method:
-
-```python
-trace(self, path, msg, type)
-```
-
-Log a trace message.
-
-
-
-
-
-warn(...)
-
-Method:
-
-```python
-warn(self, path, msg, type)
-```
-
-Log a warning message.
-
-
-
diff --git a/developer-reference/pyapi/ncs.template.md b/developer-reference/pyapi/ncs.template.md
deleted file mode 100644
index eaaa2b10..00000000
--- a/developer-reference/pyapi/ncs.template.md
+++ /dev/null
@@ -1,284 +0,0 @@
-# Python ncs.template Module
-
-This module implements classes to simplify template processing.
-
-## Classes
-
-### _class_ **Template**
-
-Class to simplify applying templates in an NCS service callback.
-
-```python
-Template(service, path=None)
-```
-
-Initialize a Template object.
-
-The 'service' argument is the 'service' variable received in
-decorated cb_create method in a service class.
-('service' can in fact be any maagic.Node (except a Root node)
-instance with an underlying Transaction). It is also possible to
-provide a maapi.Transaction instance for the 'service' argument in
-which case 'path' must also be provided.
-
-Example use:
-
- vars = ncs.template.Variables()
- vars.add('VAR1', 'foo')
- vars.add('VAR2', 'bar')
- vars.add('VAR3', 42)
- template = ncs.template.Template(service)
- template.apply('my-service-template', vars)
-
-Members:
-
-
-
-apply(...)
-
-Method:
-
-```python
-apply(self, name, vars=None, flags=0)
-```
-
-Apply the template 'name'.
-
-The optional argument 'vars' may be provided in form of a
-Variables instance.
-
-Arguments:
-
-* name -- template name (str)
-* vars -- template variables (template.Variables)
-* flags -- template flags (int, optional)
-
-
-
-### _class_ **Variables**
-
-Class to simplify passing of variables when applying a template.
-
-```python
-Variables(init=None)
-```
-
-Initialize a Variables object.
-
-The optional argument 'init' can be any iterable yielding 2-tuples
-in the form (name, value).
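-
-Example use (a sketch):
-
-    vars = ncs.template.Variables([('VAR1', 'foo'), ('VAR2', 'bar')])
-    vars.add('VAR3', 42)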
-
-Members:
-
-
-
-add(...)
-
-Method:
-
-```python
-add(self, name, value)
-```
-
-Add a value for the variable 'name'.
-
-The value will be quoted before adding it to the internal list.
-
-Quoting works like this:
- If value contains ' all occurrences of " will be replaced by ' and
- the final value will be quoted with ". Otherwise, the final value
- will be quoted with '.
-
-Arguments:
-
-* name -- service variable name (str)
-* value -- variable value (str, int, boolean)
-
-
-
-
-
-add_plain(...)
-
-Method:
-
-```python
-add_plain(self, name, value)
-```
-
-Add a value for the variable 'name'.
-
-It's up to the caller to do proper quoting of value.
-
-For arguments, see Variables.add()
-
-
-
-
-
-append(...)
-
-Method:
-
-```python
-append(self, object, /)
-```
-
-Append object to the end of the list.
-
-
-
-
-
-clear(...)
-
-Method:
-
-```python
-clear(self, /)
-```
-
-Remove all items from list.
-
-
-
-
-
-copy(...)
-
-Method:
-
-```python
-copy(self, /)
-```
-
-Return a shallow copy of the list.
-
-
-
-
-
-count(...)
-
-Method:
-
-```python
-count(self, value, /)
-```
-
-Return number of occurrences of value.
-
-
-
-
-
-extend(...)
-
-Method:
-
-```python
-extend(self, iterable, /)
-```
-
-Extend list by appending elements from the iterable.
-
-
-
-
-
-index(...)
-
-Method:
-
-```python
-index(self, value, start=0, stop=9223372036854775807, /)
-```
-
-Return first index of value.
-
-Raises ValueError if the value is not present.
-
-
-
-
-
-insert(...)
-
-Method:
-
-```python
-insert(self, index, object, /)
-```
-
-Insert object before index.
-
-
-
-
-
-pop(...)
-
-Method:
-
-```python
-pop(self, index=-1, /)
-```
-
-Remove and return item at index (default last).
-
-Raises IndexError if list is empty or index is out of range.
-
-
-
-
-
-remove(...)
-
-Method:
-
-```python
-remove(self, value, /)
-```
-
-Remove first occurrence of value.
-
-Raises ValueError if the value is not present.
-
-
-
-
-
-reverse(...)
-
-Method:
-
-```python
-reverse(self, /)
-```
-
-Reverse *IN PLACE*.
-
-
-
-
-
-sort(...)
-
-Method:
-
-```python
-sort(self, /, *, key=None, reverse=False)
-```
-
-Sort the list in ascending order and return None.
-
-The sort is in-place (i.e. the list itself is modified) and stable (i.e. the
-order of two equal elements is maintained).
-
-If a key function is given, apply it once to each list item and sort them,
-ascending or descending, according to their function values.
-
-The reverse flag can be set to sort in descending order.
-
-
-
diff --git a/developer-reference/pyapi/ncs.util.md b/developer-reference/pyapi/ncs.util.md
deleted file mode 100644
index 706204b3..00000000
--- a/developer-reference/pyapi/ncs.util.md
+++ /dev/null
@@ -1,89 +0,0 @@
-# Python ncs.util Module
-
-Utility module, low-level abstractions
-
-## Functions
-
-### get_callpoint_model
-
-```python
-get_callpoint_model()
-```
-
-Get configured callpoint model
-
-### get_self_assign_warning
-
-```python
-get_self_assign_warning()
-```
-
-Return current self assign warning type.
-
-### get_setattr_fun
-
-```python
-get_setattr_fun(obj, parent)
-```
-
-Return the setattr function to use for setting attributes; returns
-a wrapped setattr function with sanity checks if enabled.
-
-### is_multiprocessing
-
-```python
-is_multiprocessing()
-```
-
-Return True if the configured callpoint model is multiprocessing
-
-### mk_yang_date_and_time
-
-```python
-mk_yang_date_and_time(dt=None)
-```
-
-Create a timezone aware datetime object in ISO8601 string format.
-
-This method is used to convert a datetime object to its timezone aware
-counterpart and return a string useful for a 'yang:date-and-time' leaf.
-If 'dt' is None the current time will be used.
-
-Arguments:
- dt -- a datetime object to be converted (optional)
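-
-Example use (a sketch):
-
-    import datetime
-    from ncs.util import mk_yang_date_and_time
-
-    # current time as a 'yang:date-and-time' string
-    now_str = mk_yang_date_and_time()
-    # a specific timestamp
-    ts_str = mk_yang_date_and_time(datetime.datetime(2024, 1, 1, 12, 0))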
-
-### set_callpoint_model
-
-```python
-set_callpoint_model(model)
-```
-
-Update environment with provided callpoint model
-
-### set_kill_child_on_parent_exit
-
-```python
-set_kill_child_on_parent_exit()
-```
-
-Multi-OS variant of _ncs.set_kill_child_on_parent_exit, falling back
-to kqueue if the OS supports it.
-
-### set_self_assign_warning
-
-```python
-set_self_assign_warning(warning)
-```
-
-Set self assign warning type.
-
-### with_setattr_check
-
-```python
-with_setattr_check(path)
-```
-
-Use as a context manager to enable the set-attribute check for the
-current thread while inside the manager.
-
-
diff --git a/developer-reference/restconf-api/README.md b/developer-reference/restconf-api/README.md
deleted file mode 100644
index 0e1c49f6..00000000
--- a/developer-reference/restconf-api/README.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-description: Implementation details for RESTCONF.
-icon: code
----
-
-# RESTCONF API
-
-The NSO RESTCONF documentation covers implementation details and extension to or deviation from the RESTCONF RFC 8040 and YANG RFC 7950 respectively. The IETF RESTCONF and YANG RFCs are the main reference guides for the NSO RESTCONF interface, while the NSO documentation complements the RFCs.
-
-{% embed url="https://datatracker.ietf.org/doc/html/rfc8040" %}
-
-{% embed url="https://datatracker.ietf.org/doc/html/rfc7950" %}
-
-{% embed url="https://datatracker.ietf.org/doc/html/rfc7951" %}
-
-{% embed url="https://cisco-tailf.gitbook.io/nso-docs/guides/development/core-concepts/northbound-apis/restconf-api" %}
diff --git a/developer-reference/snmp-agent.md b/developer-reference/snmp-agent.md
deleted file mode 100644
index 76c1712f..00000000
--- a/developer-reference/snmp-agent.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-description: Description of SNMP agent.
-icon: message-bot
----
-
-# SNMP Agent
-
-Visit the link below to learn more.
-
-{% embed url="https://cisco-tailf.gitbook.io/nso-docs/guides/development/core-concepts/northbound-apis/nso-snmp-agent" %}
diff --git a/developer-reference/xpath.md b/developer-reference/xpath.md
deleted file mode 100644
index b05632e4..00000000
--- a/developer-reference/xpath.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-description: Implementation details for XPath.
-icon: value-absolute
----
-
-# XPath
-
-The NSO XPath documentation covers implementation details and extension to or deviation from the XPath 1.0 documentation and YANG RFC 7950 XPath extensions respectively. The XPath 1.0 documentation and YANG RFCs are the main reference guides for the NSO XPath implementation, while the NSO documentation complements them.
-
-{% embed url="https://www.w3.org/TR/1999/REC-xpath-19991116/" %}
-
-{% embed url="https://datatracker.ietf.org/doc/html/rfc7950#section-10" %}
-
-{% embed url="https://nso-docs.cisco.com/guides/resources/index#section-5-file-formats-and-syntax" %}
diff --git a/development/advanced-development/README.md b/development/advanced-development/README.md
deleted file mode 100644
index 33071de0..00000000
--- a/development/advanced-development/README.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-description: Advanced-level NSO development.
-icon: stairs
----
-
-# Advanced Development
-
diff --git a/development/advanced-development/developing-alarm-applications.md b/development/advanced-development/developing-alarm-applications.md
deleted file mode 100644
index 3d49755e..00000000
--- a/development/advanced-development/developing-alarm-applications.md
+++ /dev/null
@@ -1,383 +0,0 @@
----
-description: Manipulate NSO alarm table using the dedicated Alarm APIs.
----
-
-# Developing Alarm Applications
-
-This section focuses on how to manipulate the NSO alarm table using the dedicated Alarm APIs. Make sure that the concepts in the [Alarm Manager](../../operation-and-usage/operations/alarm-manager.md) introduction are well understood before reading this section.
-
-The Alarm API provides a simplified way of managing your alarms for the most common alarm management use cases. The API is divided into a producer and a consumer part.
-
-The producer part provides an alarm sink. Using an alarm sink, you can submit your alarms into the system. The alarms are then queued and fed into the NSO alarm list. You can have multiple alarm sinks active at any time.
-
-The consumer part provides an Alarm Source. The alarm source lets you listen to new alarms and alarm changes. As with the producer side, you can have multiple alarm sources listening for new and changed alarms in parallel.
-
-The diagram below shows a high-level view of the flow of alarms in and out of the system. Alarms are received, e.g. as SNMP notifications, and fed into the NSO Alarm List. At the other end, you subscribe for the alarm changes.
-
-
The Alarm Flow
-
-## Using the Alarm Sink
-
-The producer part of the Alarm API can be used in the following modes:
-
-* **Centralized Mode**\
- This is the preferred mode for NSO. In the centralized mode, we submit alarms to a central alarm writer that optimizes the number of sessions towards the CDB. The NSO Java VM will set up the centralized alarm sink at start-up which will be available for all Java components run by the NSO Java VM.
-* **Local Mode**\
- In the local mode, we submit alarms directly into the CDB. In this case, each Alarm Sink keeps its own CDB session. This mode is the recommended mode for applications run outside of the NSO Java VM or Java components that have a specific need for controlling the CDB session.
-
-The difference between the two modes is manifested by the way you retrieve the `AlarmSink` instance to use for alarm submission. To submit an alarm in centralized mode, a prerequisite is that a central alarm sink has been set up within your JVM. For components in the NSO Java VM, this is done for you. For applications outside of the NSO Java VM that want to utilize the centralized mode, you need to get an `AlarmSinkCentral` instance. This instance has to be started, and the central will then execute in a separate thread. The application needs to maintain this instance and stop it when the application finishes.
-
-{% code title="Retrieving and Starting an AlarmSinkCentral" %}
-```
- Socket socket = new Socket("127.0.0.1",Conf.NCS_PORT);
- Maapi maapi = new Maapi(socket);
-
- AlarmSinkCentral sinkCentral = new AlarmSinkCentral(1000, maapi);
- sinkCentral.start();
-```
-{% endcode %}
-
-The centralized alarm sink can then be retrieved using the default constructor in the `AlarmSink` class for components in the NSO Java VM.
-
-{% code title="Retrieving AlarmSink using Centralized Mode" %}
-```
- AlarmSink sink = new AlarmSink();
-```
-{% endcode %}
-
-For applications outside the NSO Java VM, the `AlarmSinkCentral` needs to be supplied when constructing the alarm sink.
-
-{% code title="Retrieving AlarmSink outside NSO Java VM" %}
-```
- AlarmSink sink = new AlarmSink(sinkCentral);
-```
-{% endcode %}
-
-When submitting an alarm using the local mode, you need a Maapi socket and a `Maapi` instance. The local mode alarm sink needs the `Maapi` instance to write alarm info to CDB. The local alarm sink is retrieved using a constructor with a `Maapi` instance as an argument.
-
-{% code title="Retrieving AlarmSink using Local Mode" %}
-```
- Socket socket = new Socket("127.0.0.1",Conf.NCS_PORT);
- Maapi maapi = new Maapi(socket);
-
-    AlarmSink sink = new AlarmSink(maapi);
-```
-{% endcode %}
-
-The `sink.submitAlarm(...)` method provided by the `AlarmSink` instance can be used in both centralized and local mode to submit an alarm.
-
-{% code title="Alarm Submit" %}
-```java
- package com.tailf.ncs.alarmman.producer;
- ...
- /**
- * Submits the specified Alarm into the alarm list.
- * If the alarm's key
- * "managedDevice, managedObject, alarmType, specificProblem" already
- * exists, the existing alarm will be updated with a
- * new status change entry.
- *
- * Alarm identity:
- *
- * @param managedDevice the managed device which emits the alarm.
- *
- * @param managedObject the managed object emitting the alarm.
- *
- * @param alarmtype the alarm type of the alarm.
- *
- * @param specificProblem is used when the alarmtype cannot uniquely
- * identify the alarm type. Normally, this is not the case,
- * and this leaf is the empty string.
- *
- * Status change within the alarm:
- * @param severity the severity of the alarm.
- * @param alarmText the alarm text
- * @param impactedObjects Objects that might be affected by this alarm
- * @param relatedAlarms Alarms related to this alarm
- * @param rootCauseObjects Objects that are candidates for causing the
- * alarm.
- * @param timeStamp The time the status of the alarm changed,
- * as reported by the device
- * @param customAttributes Custom attributes
- *
- * @return boolean true/false whether submitting the specified
- * alarm was successful
- *
- * @throws IOException
- * @throws ConfException
- * @throws NavuException
- */
- public synchronized boolean
- submitAlarm(ManagedDevice managedDevice,
- ManagedObject managedObject,
- ConfIdentityRef alarmtype,
- ConfBuf specificProblem,
- PerceivedSeverity severity,
- ConfBuf alarmText,
- List impactedObjects,
- List relatedAlarms,
- List rootCauseObjects,
- ConfDatetime timeStamp,
- Attribute ... customAttributes)
- throws NavuException, ConfException, IOException {
- ..
- }
-
- ...
- }
-```
-{% endcode %}
-
-Below is an example showing how to submit alarms using the centralized mode, which is the normal scenario for components running inside the NSO Java VM. In the example, we create an alarm sink and submit an alarm.
-
-{% code title="Submitting an Alarm in a Centralized Environment" %}
-```java
- ...
- AlarmSink sink = new AlarmSink();
- ...
-
- // Submit the alarm.
-
- sink.submitAlarm(new ManagedDevice("device0"),
- new ManagedObject("/ncs:devices/device{device0}"),
- new ConfIdentityRef(new MyAlarms().hash(),
- MyAlarms._device_on_fire),
- PerceivedSeverity.INDETERMINATE,
- "Indeterminate Alarm",
- null,
- null,
- null,
- ConfDatetime.getConfDatetime(),
- new AlarmAttribute(new myAlarm(), // A custom alarm attribute
- myAlarm._custom_alarm_attribute_,
- new ConfBuf("this is an alarm attribute")),
- new StatusChangeAttribute(new myAlarm(), // A custom status change attribute
- myAlarm._custom_status_change_attribute_,
- new ConfBuf("this is a status change attribute")));
- ...
-```
-{% endcode %}
-
-## Using the Alarm Source
-
-In contrast to the alarm sink, the alarm source only operates in centralized mode. Therefore, before being able to consume alarms using the alarm API, you need to set up a central alarm source. If you are executing components in the scope of the NSO Java VM, this central alarm source is already set up for you.
-
-You typically set up a central alarm source if you have a stand-alone application executing outside the NSO Java VM. Setting up a central alarm source is similar to setting up a central alarm sink. You need to retrieve an `AlarmSourceCentral` instance. Your application needs to maintain this instance, which implies starting it at initialization and stopping it when the application finishes.
-
-{% code title="Setting up an Alarm Source Central" %}
-```java
-    socket = new Socket("127.0.0.1", Conf.NCS_PORT);
-    cdb = new Cdb("MySourceCentral", socket);
-
-    source = new AlarmSourceCentral(MAX_QUEUE_CAPACITY, cdb);
-    source.start();
-```
-{% endcode %}
-
-The central alarm source subscribes to changes in the alarm list and forwards them to the instantiated alarm sources. The alarms are broadcast to the alarm sources. This means that each alarm source will receive its own copy of the alarm.
-
-The alarm source provides two ways of receiving alarms:
-
-* **Take**\
- Block execution until an alarm is received.
-* **Poll**\
- Wait for the alarm with a timeout. If you do not receive an alarm within the stated time frame, the call will return.
-
-{% code title="AlarmSource Receiving Methods" %}
-```java
-package com.tailf.ncs.alarmman.consumer;
-...
-public class AlarmSource {
- ...
-
- /**
- * Waits indefinitely for a new alarm or until the
- * queue is interrupted.
- *
- * @return a new alarm.
- * @throws InterruptedException
- */
- public Alarm takeAlarm() throws InterruptedException{
- ...
- }
-
- ...
-
- /**
- * Waits until the next alarm comes or until the time has expired.
- *
- * @param time time to wait.
- * @param unit the time unit of the time argument.
- * @return a new alarm, or null if the timeout expired.
- * @throws InterruptedException
- */
- public Alarm pollAlarm(int time, TimeUnit unit)
- throws InterruptedException{
- ...
- }
-```
-{% endcode %}
-
-As soon as you create an alarm source object, it will start receiving alarms. If you do not poll or take any alarms from the alarm source object, the queue will fill up until it reaches the maximum number of queued alarms, as specified by the alarm source central. The alarm source central will then start to drop the oldest alarms until the alarm source starts retrieving them. This only affects the alarm source that is lagging behind; any other alarm sources active at the same time will receive alarms without interruption.
-
-{% code title="Consuming alarms inside NSO Java VM" %}
-```java
-    AlarmSource mySource = new AlarmSource();
-
-    Alarm lAlarm = mySource.pollAlarm(10, TimeUnit.SECONDS);
-
-    while (lAlarm != null) {
-        // handle alarm
-        lAlarm = mySource.pollAlarm(10, TimeUnit.SECONDS);
-    }
-```
-{% endcode %}
-
-{% code title="Consuming alarms outside NSO Java VM" %}
-```java
-    AlarmSource mySource = new AlarmSource(source);
-
-    Alarm lAlarm = mySource.pollAlarm(10, TimeUnit.SECONDS);
-
-    while (lAlarm != null) {
-        // handle alarm
-        lAlarm = mySource.pollAlarm(10, TimeUnit.SECONDS);
-    }
-```
-{% endcode %}
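-
-If you prefer to block until an alarm arrives instead of polling with a timeout, use `takeAlarm()` from a dedicated consumer thread. The following is a minimal sketch; the thread structure is an illustrative assumption, not a prescribed pattern:
-
-{% code title="Blocking Alarm Consumer (sketch)" %}
-```java
-    // Works inside the NSO Java VM; outside it, construct the source
-    // as new AlarmSource(sourceCentral) instead.
-    AlarmSource mySource = new AlarmSource();
-
-    Thread consumer = new Thread(() -> {
-        try {
-            while (true) {
-                // Blocks until a new alarm or alarm change arrives.
-                Alarm alarm = mySource.takeAlarm();
-                // handle alarm
-            }
-        } catch (InterruptedException e) {
-            // Stop consuming when the thread is interrupted.
-        }
-    });
-    consumer.start();
-```
-{% endcode %}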
-
-## Extending the Alarm Manager, Adding User-defined Alarm Types and Fields
-
-The NSO alarm manager is extendable. NSO itself has a number of built-in alarms, and users can add their own alarm types. In the website example, we have a small YANG module that extends the set of alarm types.
-
-In the module `my-alarms.yang`, we have the following alarm type extension:
-
-{% code title="Extending Alarm Type" %}
-```yang
-module my-alarms {
-  namespace "http://examples.com/ma";
-  prefix ma;
-
-  ....
-
-  import tailf-ncs-alarms {
-    prefix al;
-  }
-
-  import tailf-common {
-    prefix tailf;
-  }
-
-  identity website-alarm {
-    base al:alarm-type;
-  }
-
-  identity webserver-on-fire {
-    base website-alarm;
-  }
-}
-{% endcode %}
-
-The `identity` statement in the YANG language is used for this type of construct. To complete our alarm type extension, we also need to populate configuration data related to the new alarm type. A good way to do that is to provide XML data in a CDB initialization file and place this file in the `ncs-cdb` directory:
-
-{% code title="my-alarms.xml" %}
-```xml
-<config xmlns="http://tail-f.com/ns/config/1.0">
-  <alarms xmlns="http://tail-f.com/ns/ncs-alarms">
-    <alarm-model>
-      <alarm-type>
-        <type xmlns:ma="http://examples.com/ma">ma:webserver-on-fire</type>
-        <event-type>equipmentAlarm</event-type>
-        <has-clear>true</has-clear>
-        <kind-of-alarm>root-cause</kind-of-alarm>
-        <probable-cause>957</probable-cause>
-      </alarm-type>
-    </alarm-model>
-  </alarms>
-</config>
-```
-{% endcode %}
-
-Another possible extension is to add fields to the existing NSO alarms. This can be useful if you want to add extra fields for attributes not directly supported by the NSO alarm list.
-
-Below is an example showing how to extend the alarm and the alarm status.
-
-{% code title="Extending alarm model" %}
-```yang
-module my-alarms {
- namespace "http://examples.com/ma";
- prefix ma;
-
- ....
-
- augment /al:alarms/al:alarm-list/al:alarm {
- leaf custom-alarm-attribute {
- type string;
- }
- }
-
- augment /al:alarms/al:alarm-list/al:alarm/al:status-change {
- leaf custom-status-change-attribute {
- type string;
- }
- }
-}
-```
-{% endcode %}
-
-## Mapping Alarms to Objects
-
-One of the strengths of the NSO model structure is its correlation capabilities. Whenever NSO FASTMAP creates a new service, it creates a back-pointer reference to the service that caused the device modification to take place. NSO template-based services generate these pointers by default. For Java-based services, back pointers are created when the `createdShared` method is used. These pointers can be retrieved and used as input to the impacted objects parameter of a raised alarm.
-
-The impacted objects of an alarm are the objects affected by the alarm, i.e., objects that depend on the alarming object or on the root cause objects. For NSO, this typically means services that have created the device configuration. An impacted object should therefore point to a service that may suffer from this alarm.
-
-The root cause object is another important object of the alarm. It describes the object that is likely the original cause of the alarm. Note that this is not the same thing as the alarming object. The alarming object is the object that raised the alarm, while the root cause object is the primary suspect for causing the alarm. In NSO, any object can raise alarms; it may be a service, a device, or something else.
-
-{% code title="Finding Back Pointers for a Given Device Path" %}
-```java
-    private List<ManagedObject> findImpactedObjects(String path)
-        throws ConfException, IOException
-    {
-        List<ManagedObject> objs = new ArrayList<ManagedObject>();
-
-        int th = -1;
-        try {
-            // A helper object that can return the topmost tag (not key)
-            // and that can reduce the path by one tag at a time (parent).
-            ExtConfPath p = new ExtConfPath(path);
-
-            // Start a read transaction towards the running configuration.
-            th = maapi.startTrans(Conf.DB_RUNNING, Conf.MODE_READ);
-
-            while (!(p.topTag().equals("config")
-                     || p.topTag().equals("ncs:config"))) {
-
-                // Check for back pointers.
-                ConfAttributeValue[] vals = this.maapi.getAttrs(th,
-                    new ConfAttributeType[] {ConfAttributeType.BACKPOINTER},
-                    p.toString());
-
-                for (ConfAttributeValue v : vals) {
-                    ConfList refs = (ConfList)v.getAttributeValue();
-                    for (ConfObject co : refs.elements()) {
-                        ManagedObject mo = new ManagedObject((ConfObjectRef)co);
-                        objs.add(mo);
-                    }
-                }
-
-                p = p.parent();
-            }
-        }
-        catch (IOException ioe) {
-            LOGGER.warn("Could not access Maapi,"
-                        + " aborting mapping attempt of impacted objects");
-        }
-        catch (ConfException ce) {
-            ce.printStackTrace();
-            LOGGER.warn("Failed to retrieve Attributes via Maapi");
-        }
-        finally {
-            if (th != -1) {  // only close a transaction we actually started
-                maapi.finishTrans(th);
-            }
-        }
-        return objs;
-    }
-```
-{% endcode %}
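-
-The returned list can be fed directly into the impacted-objects parameter of `submitAlarm(...)`. The following sketch mirrors the earlier centralized-mode submit example; the device name, paths, and alarm type are illustrative:
-
-{% code title="Raising an Alarm with Impacted Objects (sketch)" %}
-```java
-    // Find the services pointing at the modified device configuration
-    // and report them as impacted objects of the alarm.
-    List<ManagedObject> impacted =
-        findImpactedObjects("/ncs:devices/device{device0}/config");
-
-    sink.submitAlarm(new ManagedDevice("device0"),
-                     new ManagedObject("/ncs:devices/device{device0}"),
-                     new ConfIdentityRef(new MyAlarms().hash(),
-                                         MyAlarms._device_on_fire),
-                     PerceivedSeverity.MAJOR,
-                     "Webserver is on fire",
-                     impacted,   // impacted objects
-                     null,       // related alarms
-                     null,       // root cause objects
-                     ConfDatetime.getConfDatetime());
-```
-{% endcode %}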
diff --git a/development/advanced-development/developing-neds/README.md b/development/advanced-development/developing-neds/README.md
deleted file mode 100644
index 1ba898b1..00000000
--- a/development/advanced-development/developing-neds/README.md
+++ /dev/null
@@ -1,506 +0,0 @@
----
-description: Develop your own NEDs to integrate unsupported devices in your network.
----
-
-# Developing NEDs
-
-## Creating a NED
-
-A Network Element Driver (NED) represents a key NSO component that allows NSO to communicate southbound with network devices. The device YANG models contained in the Network Element Drivers (NEDs) enable NSO to store device configurations in the CDB and expose a uniform API to the network for automation. The YANG models can cover only a tiny subset of the device or all of the device. Typically, the YANG models contained in a NED represent the subset of the device's configuration data, state data, Remote Procedure Calls, and notifications to be managed using NSO.
-
-This guide provides information on NED development, focusing on building your own NED package. For a general introduction to NEDs, Cisco-provided NEDs, and NED administration, refer to the [NED Administration](../../../administration/management/ned-administration.md) in Administration.
-
-## Types of NED Packages
-
-A NED package allows NSO to manage a network device of a specific type. NEDs typically contain YANG models and code specifying how NSO should configure the device and retrieve status. When developing your own NED, there are four categories supported by NSO:
-
-* A NETCONF NED is used with NSO's built-in NETCONF client and requires no code, only YANG models. This NED is suitable for devices that strictly follow the specification for the NETCONF protocol and the YANG mappings to NETCONF, targeting a standardized machine-to-machine interface.
-* A CLI NED targets devices that use a Cisco-style CLI as a human-to-machine configuration interface. Various YANG extensions are used to annotate the YANG model representation of the device, together with code converting data between NSO and device formats.
-* A generic NED is typically used to communicate with non-CLI devices, such as devices using protocols like REST, TL1, Corba, SOAP, RESTCONF, or gNMI as a configuration interface. Even NETCONF-enabled devices often require a generic NED to function properly with NSO.
-* NSO's built-in SNMP client can manage SNMP devices by supplying NSO with the MIBs, with some additional declarative annotations and code to handle the communication to the device. Usually, this legacy protocol is used to read state data. Albeit limited, NSO has support for configuring devices using SNMP.
-
-In summary, the NETCONF and SNMP NEDs use built-in NSO clients; the CLI NED is model-driven, whereas the generic NED requires a Java program to translate operations toward the device.
-
-## Dumb Versus Capable Devices
-
-NSO differentiates between managed devices that can handle transactions and devices that cannot. This distinction applies regardless of NED type, i.e., NETCONF, SNMP, CLI, or generic.
-
-NEDs for devices that cannot handle abort must indicate so in the reply to the `newConnection()` method, stating that the NED wants a reverse diff in case of an abort. Thus, NSO has two different ways to abort a transaction towards a NED: invoking the `abort()` method with or without a generated reverse diff.
-
-For non-transactional devices, we have no other way of trying out a proposed configuration change than to send the change to the device and see what happens.
-
-The table below shows the seven different data-related callbacks that could or must be implemented by all NEDs. It also differentiates between four different types of devices and what the NED must do in each callback for each type of device.
-
-The table below displays the device types:
-
-
-| Non transactional devices | Transactional devices | Transactional devices with confirmed commit | Fully capable NETCONF server |
-| --- | --- | --- | --- |
-| SNMP, Cisco IOS, NETCONF devices with startup+running. | Devices that can abort, NETCONF devices without confirmed commit. | Cisco XR type of devices. | ConfD, Junos. |
-
-**INITIALIZE**: The initialize phase is used to initialize a transaction. For instance, if locking or other transaction preparations are necessary, they should be performed here. This callback is not mandatory to implement if no NED-specific transaction preparations are needed.
-
-
-| Non transactional devices | Transactional devices | Transactional devices with confirmed commit | Fully capable NETCONF server |
-| --- | --- | --- | --- |
-| `initialize()`. NED code shall make the device go into config mode (if applicable) and lock (if applicable). | `initialize()`. NED code shall start a transaction on the device. | `initialize()`. NED code shall do the equivalent of configure exclusive. | Built in, NSO will lock. |
-
-**UNINITIALIZE**: If the transaction is not completed and the NED has done INITIALIZE, this method is called to undo the transaction preparations, that is, restoring the NED to the state before INITIALIZE. This callback is not mandatory to implement if no NED-specific preparations were performed in INITIALIZE.
-
-
-| Non transactional devices | Transactional devices | Transactional devices with confirmed commit | Fully capable NETCONF server |
-| --- | --- | --- | --- |
-| `uninitialize()`. NED code shall unlock (if applicable). | `uninitialize()`. NED code shall abort the transaction. | `uninitialize()`. NED code shall abort the transaction. | Built in, NSO will unlock. |
-
-**PREPARE**: In the prepare phase, the NEDs get exposed to all the changes that are destined for each managed device handled by each NED. It is the responsibility of the NED to determine the outcome here. If the NED replies successfully from the prepare phase, NSO assumes the device will be able to go through with the proposed configuration change.
-
-
-| Non transactional devices | Transactional devices | Transactional devices with confirmed commit | Fully capable NETCONF server |
-| --- | --- | --- | --- |
-| `prepare(Data)`. NED code shall send all data to the device. | `prepare(Data)`. NED code shall add Data to the transaction and validate. | `prepare(Data)`. NED code shall add Data to the transaction and validate. | Built in, NSO will edit-config towards the candidate, validate, and commit confirmed with a timeout. |
-
-**ABORT**: If any participants in the transaction reject the proposed changes, all NEDs will be invoked in the `abort()` method for each managed device the NED handles. It is the responsibility of the NED to make sure that whatever was done in the PREPARE phase is undone. For NEDs that indicate as a reply in `newConnection()` that they want the reverse diff, they will get the reverse data as a parameter here.
-
-
-| Non transactional devices | Transactional devices | Transactional devices with confirmed commit | Fully capable NETCONF server |
-| --- | --- | --- | --- |
-| `abort(ReverseData \| null)`. Either do the equivalent of copy startup to running, or apply the ReverseData to the device. | `abort(ReverseData \| null)`. Abort the transaction. | `abort(ReverseData \| null)`. Abort the transaction. | Built in, discard-changes and close. |
-
-**COMMIT**: Once all NEDs that get invoked in `commit(Timeout)` reply OK, the transaction is permanently committed to the system. The NED may still reject the change in COMMIT. If any NED rejects the COMMIT, all participants will be invoked in REVERT. NEDs that support confirmed commit with a timeout (e.g., Cisco XR) may choose to use the provided timeout to make REVERT easy to implement.
-
-
-| Non transactional devices | Transactional devices | Transactional devices with confirmed commit | Fully capable NETCONF server |
-| --- | --- | --- | --- |
-| `commit(Timeout)`. Do nothing. | `commit(Timeout)`. Commit the transaction. | `commit(Timeout)`. Execute commit confirmed [Timeout] on the device. | Built in, commit confirmed with the timeout. |
-
-**REVERT**: This state is reached if any NED reports failure in the COMMIT phase. Similar to the ABORT state, the reverse diff is supplied to the NED if the NED has asked for that.
-
-
-| Non transactional devices | Transactional devices | Transactional devices with confirmed commit | Fully capable NETCONF server |
-| --- | --- | --- | --- |
-| `revert(ReverseData \| null)`. Either do the equivalent of copy startup to running, or apply the ReverseData to the device. | `revert(ReverseData \| null)`. Either do the equivalent of copy startup to running, or apply the ReverseData to the device. | `revert(ReverseData \| null)`. discard-changes. | Built in, discard-changes and close. |
-
-**PERSIST**: This state is reached at the end of a successful transaction. Here it's the responsibility of the NED to make sure that if the device reboots, the changes are still there.
-
-
-| Non transactional devices | Transactional devices | Transactional devices with confirmed commit | Fully capable NETCONF server |
-| --- | --- | --- | --- |
-| `persist()`. Either do the equivalent of copy running to startup or nothing. | `persist()`. Either do the equivalent of copy running to startup or nothing. | `persist()`. confirm. | Built in, commit confirm. |
-
-The following state diagram depicts the different states the NED code goes through in the life of a transaction.
-
-_NED Transaction States_
-
-## Statistics
-
-NED devices have runtime data and statistics. The first part of being able to collect non-configuration data from a NED device is to model the statistics data we wish to gather. In normal YANG files, it is common to have the runtime data nested inside the configuration data. In gathering runtime data for NED devices, we have chosen to separate configuration data and runtime data. In the case of the archetypical CLI device, `show running-config ...` and friends are used to display the running configuration of the device, whereas other `show ...` commands are used to display runtime data, for example `show interfaces` and `show routes`. Different types of routers and switches use different commands and, in particular, different tabular output formats.
-
-To expose runtime data from a NED controlled device, regardless of whether it's a CLI NED or a Generic NED, we need to do two things:
-
-* Write YANG models for the aspects of runtime data we wish to expose northbound in NSO.
-* Write Java NED code that is responsible for collecting that data.
-
-The NSO NED for the Avaya 4k device contains a data model for some real statistics for the Avaya router and also the accompanying Java NED code. Let's start by taking a look at the YANG model for the stats portion:
-
-{% code title="Example: NED Stats YANG Model" %}
-```yang
-module tailf-ned-avaya-4k-stats {
- namespace 'http://tail-f.com/ned/avaya-4k-stats';
- prefix avaya4k-stats;
-
- import tailf-common {
- prefix tailf;
- }
- import ietf-inet-types {
- prefix inet;
- }
-
- import ietf-yang-types {
- prefix yang;
- }
-
- container stats {
- config false;
- container interface {
- list gigabitEthernet {
- key "num port";
- tailf:cli-key-format "$1/$2";
-
- leaf num {
- type uint16;
- }
-
- leaf port {
- type uint16;
- }
-
- leaf in-packets-per-second {
- type uint64;
- }
-
- leaf out-packets-per-second {
- type uint64;
- }
-
- leaf in-octets-per-second {
- type uint64;
- }
-
- leaf out-octets-per-second {
- type uint64;
- }
-
- leaf in-octets {
- type uint64;
- }
-
- leaf out-octets {
- type uint64;
- }
-
- leaf in-packets {
- type uint64;
- }
-
- leaf out-packets {
- type uint64;
- }
- }
- }
- }
-}
-```
-{% endcode %}
-
-It's a `config false;` list of counters per interface. We compile the NED stats module with the `--ncs-compile-module` flag or with the `--ncs-compile-bundle` flag. The same non-config module can contain runtime data as well as commands and RPCs.
-
-```bash
-$ ncsc --ncs-compile-module avaya4k-stats.yang \
-       --ncs-device-dir <device-dir>
-```
-
-The `config false;` data from a module that has been compiled with the `--ncs-compile-module` flag will end up mounted under the `/devices/device/live-status` tree. Thus, running the NED towards a real router, we have:
-
-{% code title="Example: Displaying NED Stats in the CLI" %}
-```cli
-admin@ncs# show devices device r1 live-status interfaces
-
-live-status {
- interface gigabitEthernet1/1 {
- in-packets-per-second 234;
- out-packets-per-second 177;
- in-octets-per-second 4567;
- out-octets-per-second 3561;
- in-octets 12666;
- out-octets 16888;
- in-packets 7892;
- out-packets 2892;
- }
- ............
-```
-{% endcode %}
-
-It is the responsibility of the NED code to populate the data in the live device tree. Whenever a northbound agent tries to read any data in the live device tree for a NED device, the NED code is invoked.
-
-The NED code implements an interface called `NedConnection`. This interface contains:
-
-```java
-void showStatsPath(NedWorker w, int th, ConfPath path)
- throws NedException, IOException;
-```
-
-This interface method is invoked by NSO in the NED. The Java code must return what is requested, but it may also return more. The Java code always needs to signal errors by invoking `NedWorker.error()` and success by invoking `NedWorker.showStatsPathResponse()`. The latter function indicates what is returned, and also how long it shall be cached inside NSO.
-
-The reason for this design is that it is common for many `show` commands to work on, for example, an entire interface or some other item in the managed device. Say that the NSO operator (or MAAPI code) invokes:
-
-```bash
-admin@host> show status devices device r1 live-status \
- interface gigabitEthernet1/1/1 out-octets
-out-octets 340;
-```
-
-requesting a single leaf, the NED Java code can decide to execute any arbitrary `show` command towards the managed device, parse the output, and populate as much data as it wants. The Java code also decides how long NSO shall cache the data. A sketch tying this together follows the list below.
-
-* When the `showStatsPath()` is invoked, the NED should indicate the state/value of the node indicated by the path (i.e. if a leaf was requested, the NED should write the value of this leaf to the provided transaction handler (th) using MAAPI, or indicate its absence as described below; if a list entry or a presence container was requested then the NED should indicate presence or absence of the element, if the whole list is requested then the NED should populate the keys for this list). Often requesting such data from the actual device will give the NED more data than specifically requested, in which case the worker is free to write other values as well. The NED is not limited to populating the subtree indicated by the path, it may also write values outside this subtree. NSO will then not request those paths but read them directly from the transaction. Different timeouts can be provided for different paths.\
- \
- If a leaf does not have a value or does not exist, the NED can indicate this by returning a TTL for the path to the leaf, without setting the value in the provided transaction. This has changed from earlier versions of NSO. The same applies to optional containers and list entries. If the NED populates the keys for a certain list (both when it is requested to do so or when it decided to do so because it has received this data from the device), it should set the TTL value for the list itself to indicate the time the set of keys should be considered up to date. It may choose to provide different TTL values for some or all list entries, but it is not required to do so.
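-
-Below is a minimal sketch of such a callback. The device I/O helper `runShowCommand()` and the parser `parseInOctets()` are hypothetical, the `maapi` field is assumed to be a `Maapi` instance held by the NED, and the exact signature of `showStatsPathResponse()` (here assumed to take a TTL in seconds) should be verified against the `NedWorker` Javadoc:
-
-{% code title="Sketch: Populating Live Status Data in showStatsPath()" %}
-```java
-    public void showStatsPath(NedWorker w, int th, ConfPath path)
-        throws NedException, IOException {
-        // Run a show command on the device and parse out a counter value
-        // (runShowCommand() and parseInOctets() are hypothetical helpers).
-        String output = runShowCommand("show interfaces gigabitEthernet 1/1");
-        long inOctets = parseInOctets(output);
-
-        try {
-            // Write the value into the provided transaction (th) using MAAPI.
-            maapi.setElem(th, new ConfUInt64(inOctets),
-                          path.toString() + "/in-octets");
-        } catch (ConfException e) {
-            throw new NedException("failed to write stats: " + e.getMessage());
-        }
-
-        // Signal success and indicate how long NSO may cache the data
-        // (assumption: a TTL in seconds).
-        w.showStatsPathResponse(60);
-    }
-```
-{% endcode %}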
-
-## Making the NED Handle Default Values Properly
-
-One important task when implementing a NED of any type is to make it mimic the device's handling of default values as closely as possible. Network equipment can typically deal with default values in many different ways.
-
-Some devices display default values on leafs even if they have not been explicitly set. Others use trimming, meaning that if a leaf is set to its default value, it will be 'unset' and disappear from the device's configuration dump.
-
-It is the responsibility of the NED to make the NSO aware of how the device handles default values. This is done by registering a special NED Capability entry with the NSO. Two modes are currently supported by the NSO: `trim` and `report-all`.
-
-**Example: A Device Trimming Default Values**
-
-This is the typical behavior of a Cisco IOS device. The simple YANG snippet below illustrates the behavior: a container with a boolean leaf whose default value is true.
-
-```yang
-container aaa {
- leaf enabled {
- default true;
- type boolean;
- }
-}
-```
-
-Try setting the leaf to true in NSO and commit. Then compare the configuration:
-
-```bash
-$ ncs_cli -C -u admin
-```
-
-```bash
-admin@ncs# config
-```
-
-```bash
-admin@ncs(config)# devices device a0 config aaa enabled true
-```
-
-```bash
-admin@ncs(config)# commit
-```
-
-```bash
-Commit complete.
-```
-
-```cli
-admin@ncs(config)# top devices device a0 compare-config
-
-diff
- devices {
- device a0 {
- config {
- aaa {
-- enabled;
- }
- }
- }
-}
-```
-
-The result shows that the configurations differ. The reason is that the device does not display the value of the leaf `enabled`; it has been trimmed since it has its default value. NSO is now out of sync with the device.
-
-To solve this issue, make the NED tell the NSO that the device is trimming default values. Register an extra NED Capability entry in the Java code.
-
-```java
-NedCapability capas[] = new NedCapability[2];
-capas[0] = new NedCapability(
- "",
- "urn:ios",
- "tailf-ned-cisco-ios",
- Collections.emptyList(),
- "2015-01-01",
- Collections.emptyList());
-capas[1] = new NedCapability(
- "urn:ietf:params:netconf:capability:" +
- "with-defaults:1.0?basic-mode=trim", // Set mode to trim
- "urn:ietf:params:netconf:capability:" +
- "with-defaults:1.0",
- "",
- Collections.emptyList(),
- "",
- Collections.emptyList());
-```
-
-Now, try the same operation again:
-
-```bash
-$ ncs_cli -C -u admin
-```
-
-```cli
-admin@ncs# config
-```
-
-```cli
-admin@ncs(config)# devices device a0 config aaa enabled true
-```
-
-```cli
-admin@ncs(config)# commit
-```
-
-```
-Commit complete.
-```
-
-```cli
-admin@ncs(config)# top devices device a0 compare-config
-```
-
-```cli
-admin@ncs(config)#
-```
-
-The NSO is now in sync with the device.
-
-**Example: A Device Displaying All Default Values**
-
-Some devices display default values for leafs even if they have not been explicitly set. The simple YANG code below will be used to illustrate this behavior. A list containing a key and a leaf with a default value.
-
-```yang
-list interface {
-  key id;
-  leaf id {
-    type string;
-  }
-  leaf threshold {
-    default 20;
-    type uint8;
-  }
-}
-```
-
-Try creating a new list entry in NSO and commit. Then compare the configuration:
-
-```bash
-$ ncs_cli -C -u admin
-```
-
-```cli
-admin@ncs# config
-```
-
-```cli
-admin@ncs(config)# devices device a0 config interface myinterface
-```
-
-```cli
-admin@ncs(config)# commit
-```
-
-```cli
-admin@ncs(config)# top devices device a0 compare-config
-
-diff
- devices {
- device a0 {
- config {
- interface myinterface {
-+ threshold 20;
- }
- }
- }
- }
-```
-
-The result shows that the configurations differ. NSO is out of sync. This is because the device displays the default value of the `threshold` leaf even if it has not been explicitly set through NSO.
-
-To solve this issue, make the NED tell the NSO that the device is reporting all default values. Register an extra NED Capability entry in the Java code.
-
-```java
-NedCapability capas[] = new NedCapability[2];
-capas[0] = new NedCapability(
- "",
- "urn:abc",
- "tailf-ned-abc",
- Collections.emptyList(),
- "2015-01-01",
- Collections.emptyList());
-capas[1] = new NedCapability(
- "urn:ietf:params:netconf:capability:" +
- "with-defaults:1.0?basic-mode=report-all", // Set mode to report-all
- "urn:ietf:params:netconf:capability:" +
- "with-defaults:1.0",
- "",
- Collections.emptyList(),
- "",
- Collections.emptyList());
-```
-
-Now, try the same operation again:
-
-```bash
-$ ncs_cli -C -u admin
-```
-
-```cli
-admin@ncs# config
-```
-
-```cli
-admin@ncs(config)# devices device a0 config interface myinterface
-```
-
-```cli
-admin@ncs(config)# commit
-```
-
-```
-Commit complete.
-```
-
-```cli
-admin@ncs(config)# top devices device a0 compare-config
-```
-
-```cli
-admin@ncs(config)#
-```
-
-The NSO is now in sync with the device.
-
-## Dry-run Considerations
-
-The possibility to do a dry run on a transaction is a feature in NSO that allows you to examine the changes to be pushed out to the managed devices in the network. The output can be produced in different formats, namely `cli`, `xml`, and `native`. In order to produce a dry run in the native output format, NSO needs to know the exact syntax used by the device, and the task of converting the commands or operations produced by NSO into the device-specific output belongs to the corresponding NED. This is the purpose of the `prepareDry()` callback in the NED interface.
-
-In order to be able to invoke a callback, an instance of the NED object needs to be created first. There are two ways to instantiate a NED:
-
-* The `newConnection()` callback tells the NED to establish a connection to the device, which can later be used to perform any action, such as showing the configuration, applying changes, or viewing operational data, as well as producing dry-run output.
-* The optional `initNoConnect()` callback tells the NED to create an instance that will not need to communicate with the device, and hence must not establish a connection or otherwise communicate with the device. This instance will only be used to calculate dry-run output. It is possible for a NED to reject the `initNoConnect()` request if it is not able to calculate the dry-run output without establishing a connection to the device, for example, if a NED is capable of managing devices with different flavors of syntax and it is not known at the moment which syntax is used by this particular device.
-
-The following state diagram displays NED states specific to the dry-run scenario.
-
-_NED Dry-run States_
-
-## NED Identification
-
-Each managed device in NSO has a device type, which informs NSO how to communicate with the device. The device type is one of `netconf`, `snmp`, `cli`, or `generic`. In addition, a special `ned-id` identifier is needed.
-
-NSO uses a technique called YANG Schema Mount, where all the data models from a device are mounted into the `/devices` tree in NSO. Each set of mounted data models is completely separated from the others (they are confined to a "mount jail"). This makes it possible to load different versions of the same YANG module for different devices. The functionality is called Common Data Models (CDM).
-
-In most cases, there are many devices running the same software version in the network managed by NSO, thus using the exact same set of YANG modules. With CDM, all YANG modules for a certain device (or family of devices) are contained in a NED package (or just NED for short). If the YANG modules on the device are updated in a backward-compatible way, the NED is also updated.
-
-However, if the YANG modules on the device are updated in an incompatible way in a new version of the device's software, it might be necessary to create a new NED package for the new set of modules. Without CDM, this would not be possible, since there would be two different packages that contained different versions of the same YANG module.
-
-When a NED is being built, its YANG modules are compiled to be mounted into the NSO YANG model. This is done by device compilation of the device's YANG modules and is performed via the `ncsc` tool provided by NSO.
-
-The ned-id identifier is a YANG identity, which must be derived from one of the pre-defined identities in `$NCS_DIR/src/ned/yang/tailf-ncs-ned.yang`.
-
-A YANG model for devices handled by NED code needs to extend the base identity and provide a new identity that can be configured.
-
-{% code title="Example: Defining a User Identity" %}
-```yang
-import tailf-ncs-ned {
- prefix ned;
-}
-
-identity cisco-ios {
- base ned:cli-ned-id;
-}
-```
-{% endcode %}
-
-The Java NED code registers the identity it handles with NSO.
-
-Similar to how we import device models for NETCONF-based devices, we use the `ncsc --ncs-compile-bundle` command to import YANG models for NED-handled devices.
-
-Once we have imported such a YANG model into NSO, we can configure the managed device in NSO to be handled by the appropriate NED handler (which is user Java code, more on that later):
-
-{% code title="Example: Setting the Device Type" %}
-```cli
-admin@ncs# show running-config devices device r1
-
-address 127.0.0.1
-port 2025
-authgroup default
-device-type cli ned-id cisco-ios
-state admin-state unlocked
-...
-```
-{% endcode %}
-
-When NSO needs to communicate southbound towards a managed device that is not of type NETCONF, it will look for a NED that has registered with the name of the identity, in the case above, the string "cisco-ios".
-
-Thus, before NSO attempts to connect to a NED device, and before it tries to sync or manipulate the configuration of the device, user-provided Java NED code must have registered with the NSO service manager, indicating which Java class is responsible for the NED with the string of the identity, in this case, the string "cisco-ios". This happens automatically when the NSO Java VM gets an `instantiate-component` request for an NSO package component of type `ned`.
-
-The component Java class `myNed` needs to implement either of the interfaces `NedGeneric` or `NedCli`. Both interfaces require the NED class to implement the following:
-
-{% code title="Example: NED Identification Callbacks" %}
-```java
-// should return "cli" or "generic"
-String type();
-
-// Which YANG modules are covered by the class
-String [] modules();
-
-// Which identity is implemented by the class
-String identity();
-```
-{% endcode %}
-
-The above three callbacks are used by the NSO Java VM to connect the NED Java class with NSO. They are called when the NSO Java VM receives the `instantiate-component` request.
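-
-As an illustration, a minimal identification skeleton might look like the sketch below. The class and module names are hypothetical, and all connection and data callbacks are omitted:
-
-{% code title="Sketch: NED Identification Skeleton" %}
-```java
-public class MyNed /* extends NedCliBase */ {
-
-    // Identification callbacks as described above.
-    public String type() {
-        return "cli";                                   // a CLI NED
-    }
-
-    public String[] modules() {
-        return new String[] { "tailf-ned-cisco-ios" };  // covered YANG modules
-    }
-
-    public String identity() {
-        return "cisco-ios";                             // the registered ned-id
-    }
-
-    // newConnection(), isConnection(), reconnect(), and the data
-    // callbacks are omitted in this sketch.
-}
-```
-{% endcode %}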
-
-The underlying NedMux will start a number of threads, and invoke the registered class with other data callbacks as transactions execute.
diff --git a/development/advanced-development/developing-neds/cli-ned-development.md b/development/advanced-development/developing-neds/cli-ned-development.md
deleted file mode 100644
index 33a7cb1a..00000000
--- a/development/advanced-development/developing-neds/cli-ned-development.md
+++ /dev/null
@@ -1,3725 +0,0 @@
----
-description: Create CLI NEDs.
----
-
-# CLI NED Development
-
-The CLI NED is a model-driven way to script CLI towards all Cisco-like devices. Some Java code is necessary for handling the corner cases a human-to-machine interface presents.
-
-See the [examples.ncs/device-manager/cli-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/cli-ned) for an example of a Java implementation serving any YANG models, including those that come with the example.
-
-The NSO CLI NED southbound of NSO shares a Cisco-style CLI engine with the northbound NSO CLI interface, and the CLI engine can thus run in both directions, producing CLI southbound and interpreting CLI data coming from southbound while presenting a CLI interface northbound. It is helpful to keep this in mind when learning and working with CLI NEDs.
-
-* A sequence of Cisco CLI commands can be turned into the equivalent manipulation of the internal XML tree that represents the configuration inside NSO.
-
- A YANG model, annotated appropriately, will produce a Cisco CLI. The user can enter Cisco commands, and NSO will parse the Cisco CLI commands using the annotated YANG model and change the internal XML tree accordingly. Thus, this is the CLI parser and interpreter. Model-driven.
-* The reverse operation is also possible: given two different XML trees, each representing a configuration state, the CLI engine can generate the list of Cisco commands that takes the configuration from one state to the other. In the netsim/ConfD case, the XML tree represents the configuration of a single device, i.e., the device using ConfD as a management framework, while in the NSO case it represents the entire network configuration.
-
- NSO uses this technology to generate CLI commands southbound when we manage Cisco-like devices.
-
-It will become clear later in the examples how the CLI engine runs in forward and reverse mode. The key point though, is that the Cisco CLI NED Java programmer doesn't have to understand and parse the structure of the CLI; this is entirely done by the NSO CLI engine.
-
-To implement a CLI NED, the following components are required:
-
-* A YANG data model that describes the CLI. An important development tool here is netsim (ConfD), the Tail-f on-device management toolkit. For NSO to manage a CLI device, it needs a YANG file with exactly the right annotations to produce precisely the managed device's CLI. A few examples exist in the NSO NED evaluation collection with annotated YANG models that render different Cisco CLI variants.
-
- \
- See, for example, `$NCS_DIR/packages/neds/dell-ftos` and `$NCS_DIR/packages/neds/cisco-nx`. Look for `tailf:cli-*` extensions in the NED `src/yang` directory YANG models.
-
- \
- Thus, to create annotated YANG files for a device with a Cisco-like CLI, the work procedure is to run netsim (ConfD) and write a YANG file that renders the correct CLI.
-
- \
- Furthermore, this YANG model must declare an identity with `ned:cli-ned-id` as a base.
-* It is important to note that a NED only needs to cover certain aspects of the device. To have NSO manage a device with a Cisco-like CLI you do not have to model the entire device, only the commands intended to be used need to be covered. When the `show()` callback issues its `show running-config [toptag]` command and the device replies with data that is fed to NSO, NSO will ignore all command dump output that the loaded YANG models do not cover.
-
- \
- Thus, whichever Cisco-like device we wish to manage, we must first have YANG models from NSO that cover all aspects of the device we want to use. Once we have a YANG model, we load it into NSO and modify the example CLI NED class to return the NedCapability list of the device.
-* The NED code gets to see all data from and to the device. If it's impossible or too hard to get the YANG model exactly right for all commands, a last resort is to let the NED code modify the data inline.
-* The next thing required is a Java class that implements the NED. This is typically not a lot of code, and the existing example NED Java classes are easily extended and modified to fit other needs. The most important point of the Java NED class code is that the code can be oblivious to the CLI commands sent and received.
-
-Java CLI NED code must implement the `CliNed` interface.
-
-* **`NedConnectionBase.java`**. See `$NCS_DIR/java/jar/ncs-src.jar`. Use jar xf ncs-src.jar to extract the JAR file. Look for `src/com/tailf/ned/NedConnectionBase.java`.
-* **`NedCliBase.java`**. See `$NCS_DIR/java/jar/ncs-src.jar`. Use jar xf ncs-src.jar to extract the JAR file. Look for `src/com/tailf/ned/NedCliBase.java`.
-
-Thus, the Java NED class has the following responsibilities.
-
-* It must implement the identification callbacks, i.e `modules()`, `type()`, and `identity()`
-* It must implement the connection-related callback methods `newConnection()`, `isConnection()` and `reconnect()`
-
- \
- NSO will invoke the `newConnection()` when it requires a connection to a managed device. The `newConnection()` method is responsible for connecting to the device, figuring out exactly what type of device it is, and returning an array of `NedCapability` objects.\\
-
- ```java
- public class NedCapability {
-
- public String str;
- public String uri;
- public String module;
- public String features;
- public String revision;
- public String deviations;
-
- ....
- ```
-
- This is very much in line with how a NETCONF connect works and how the NETCONF client and server exchange hello messages.
-* Finally, the NED code must implement a series of data methods. For example, the method `void prepare(NedWorker w, String data)` gets a `String` object which is the set of Cisco CLI commands it shall send to the device.
-
- \
-  In the other direction, when NSO wants to collect data from the device, it will invoke `void show(NedWorker w, String toptag)` for each tag found at the top of the data model(s) loaded for that device. For example, if the NED gets invoked with `show(w, "interface")`, its responsibility is to invoke the relevant show configuration command for "interface", i.e. `show running-config interface`, over the connection to the device, and then dumbly reply with all the data the device replies with. NSO will parse the output data and feed it into its internal XML trees.
-
- \
- NSO can order the `showPartial()` to collect part of the data if the NED announces the capability `http://tail-f.com/ns/ncs-ned/show-partial?path-format=FORMAT` in which FORMAT is of the following:
-
-  * `key-path`: support regular instance keypath format.
-  * `top-tag`: support top tags under the `/devices/device/config` tree.
-  * `cmd-path-full`: support Cisco's CLI edit path with instances.
-  * `path-modes-only`: support Cisco CLI mode path.
-  * `cmd-path-modes-only-existing`: same as `path-modes-only`, but NSO only supplies the path mode of existing nodes.
-
-## Writing a Data Model for a CLI NED
-
-The idea is to write a YANG data model and feed that into the NSO CLI engine such that the resulting CLI mimics that of the device to manage. This is fairly straightforward once you have understood how the different constructs in YANG are mapped into CLI commands. The data model usually needs to be annotated with a specific Tail-f CLI extension to tailor exactly how the CLI is rendered.
-
-This section will describe how the general principles work and give a number of cookbook-style examples of how certain CLI constructs are modeled.
-
-The CLI NED is primarily designed to be used with devices that have a CLI similar to the CLIs on a typical Cisco box (i.e., IOS, XR, NX-OS, etc.). However, if the CLI follows the same principles but with a slightly different syntax, it may still be possible to use a CLI NED if some of the differences are handled by the Java part of the CLI NED. This section will describe how this can be done.
-
-Let's start with the basic data model for CLI mapping. YANG consists of three major elements: containers, lists, and leaves. For example:
-
-```yang
-container interface {
-  list ethernet {
-    key id;
-
-    leaf id {
-      type uint16 {
-        range "0..66";
-      }
-    }
-
-    leaf description {
-      type string {
-        length "1..80";
-      }
-    }
-
-    leaf mtu {
-      type uint16 {
-        range "64..18000";
-      }
-    }
-  }
-}
-```
-
-The basic rendering of the constructs is as follows. Containers are rendered as command prefixes which can be stacked at any depth. Leaves are rendered as commands that take one parameter. Lists are rendered as submodes, where the key of the list is rendered as a submode parameter. The example above would result in the command:
-
-```
-interface ethernet ID
-```
-
-for entering the interface ethernet submode. `interface` is a container and is rendered as a command prefix, while `ethernet` is a list and is rendered as a submode. Two additional commands would be available in the submode:
-
-```
-description WORD
-mtu INTEGER<64-18000>
-```
-
-A typical configuration with two interfaces could look like this:
-
-```
-interface ethernet 0
-description "customer a"
-mtu 1400
-!
-interface ethernet 1
-description "customer b"
-mtu 1500
-!
-```
-
-Note that it makes sense to add help texts to the data model since these texts will be visible in NSO and help the user see the mapping between the J-style CLI in NSO and the CLI on the target device. The data model above may look like the following with proper help texts:
-
-```yang
-container interface {
-  tailf:info "Configure interfaces";
-
-  list ethernet {
-    tailf:info "FastEthernet IEEE 802.3";
-    key id;
-
-    leaf id {
-      type uint16 {
-        range "0..66";
-        tailf:info "<0-66>;;FastEthernet interface number";
-      }
-    }
-
-    leaf description {
-      type string {
-        length "1..80";
-        tailf:info "LINE;;Up to 80 characters describing this interface";
-      }
-    }
-
-    leaf mtu {
-      type uint16 {
-        range "64..18000";
-        tailf:info "<64-18000>;;MTU size in bytes";
-      }
-    }
-  }
-}
-```
-
-Help texts are generally not included in the examples below to save space, but they should be present in a production data model.
-
-## Tweaking the Basic Rendering Scheme
-
-The basic rendering suffices in many cases but not in all situations. What follows is a list of ways to annotate the data model in order to make the CLI engine mimic a device.
-
-### **Suppressing Submodes**
-
-Sometimes you want a number of instances (a list) but do not want a submode. For example:
-
-```yang
-container dns {
-  leaf domain {
-    type string;
-  }
-  list server {
-    ordered-by user;
-    tailf:cli-suppress-mode;
-    key ip;
-
-    leaf ip {
-      type inet:ipv4-address;
-    }
-  }
-}
-```
-
-The above would result in the following commands:
-
-```
-dns domain WORD
-dns server IPAddress
-```
-
-A typical `show-config` output may look like:
-
-```
-dns domain tail-f.com
-dns server 192.168.1.42
-dns server 8.8.8.8
-```
-
-### **Adding a Submode**
-
-Sometimes you want a submode to be created without having a list instance, for example, a submode called `aaa` where all AAA configuration is located.
-
-This is done by using the `tailf:cli-add-mode` extension. For example:
-
-```yang
-container aaa {
- tailf:info "AAA view";
- tailf:cli-add-mode;
- tailf:cli-full-command;
-
- ...
-}
-```
-
-This would result in the command **aaa** for entering the container. However, sometimes the CLI requires that a certain set of elements are also set when entering the submode, but without being a list. For example, the police rules inside a policy map in the Cisco 7200.
-
-```yang
-container police {
- // To cover also the syntax where cir, bc and be
- // doesn't have to be explicitly specified
- tailf:info "Police";
- tailf:cli-add-mode;
- tailf:cli-mode-name "config-pmap-c-police";
- tailf:cli-incomplete-command;
- tailf:cli-compact-syntax;
- tailf:cli-sequence-commands {
- tailf:cli-reset-siblings;
- }
- leaf cir {
- tailf:info "Committed information rate";
- tailf:cli-hide-in-submode;
- type uint32 {
- range "8000..2000000000";
- tailf:info "<8000-2000000000>;;Bits per second";
- }
- }
- leaf bc {
- tailf:info "Conform burst";
- tailf:cli-hide-in-submode;
- type uint32 {
- range "1000..512000000";
- tailf:info "<1000-512000000>;;Burst bytes";
- }
- }
- leaf be {
- tailf:info "Excess burst";
- tailf:cli-hide-in-submode;
- type uint32 {
- range "1000..512000000";
- tailf:info "<1000-512000000>;;Burst bytes";
- }
- }
- leaf conform-action {
- tailf:cli-break-sequence-commands;
- tailf:info "action when rate is less than conform burst";
- type police-action-type;
- }
- leaf exceed-action {
- tailf:info "action when rate is within conform and "+
- "conform + exceed burst";
- type police-action-type;
- }
- leaf violate-action {
- tailf:info "action when rate is greater than conform + "+
- "exceed burst";
- type police-action-type;
- }
-}
-```
-
-Here, the leaves with the `tailf:cli-hide-in-submode` annotation are not present as commands once the submode has been entered, but are instead only available as options to the police command when entering the police submode.
-
-### **Commands with Multiple Parameters**
-
-Often a command is defined as taking multiple parameters in a typical Cisco CLI. This is achieved in the data model by using the annotations `tailf:cli-sequence-commands`, `tailf:cli-compact-syntax`, `tailf:cli-drop-node-name`, and possibly `tailf:cli-reset-siblings`.
-
-For example:
-
-```yang
-container udld-timeout {
-  tailf:info "LACP unidirectional-detection timer";
-  tailf:cli-sequence-commands {
-    tailf:cli-reset-all-siblings;
-  }
-  tailf:cli-compact-syntax;
-  leaf "timeout-type" {
-    tailf:cli-drop-node-name;
-    type enumeration {
-      enum fast {
-        tailf:info "in unit of milli-seconds";
-      }
-      enum slow {
-        tailf:info "in unit of seconds";
-      }
-    }
-  }
-  leaf "milli" {
-    tailf:cli-drop-node-name;
-    when "../timeout-type = 'fast'" {
-      tailf:dependency "../timeout-type";
-    }
-    type uint16 {
-      range "100..1000";
-      tailf:info "<100-1000>;;timeout in unit of "
-                +"milli-seconds";
-    }
-  }
-  leaf "secs" {
-    tailf:cli-drop-node-name;
-    when "../timeout-type = 'slow'" {
-      tailf:dependency "../timeout-type";
-    }
-    type uint16 {
-      range "1..60";
-      tailf:info "<1-60>;;timeout in unit of seconds";
-    }
-  }
-}
-```
-
-This results in the command:
-
-```
-udld-timeout [fast | slow ]
-```
-
-The `tailf:cli-sequence-commands` annotation tells the CLI engine to process the leaves in sequence. The `tailf:cli-reset-all-siblings` annotation tells the CLI to reset all leaves in the container if one is set. This is necessary in order to ensure that no lingering config remains from a previous invocation of the command where more parameters were configured. The `tailf:cli-drop-node-name` annotation tells the CLI that the leaf name shouldn't be specified. The `tailf:cli-compact-syntax` annotation tells the CLI that the leaves should be formatted on one line, i.e. as:
-
-```
-udld-timeout fast 1000
-```
-
-As opposed to without the annotation:
-
-```
-uldl-timeout fast
-uldl-timeout 1000
-```
-
-The `when` constructs are used to control whether the numerical value should go into the `milli` or the `secs` leaf.
-
-This command could also be written using a choice construct as:
-
-```yang
-container udld-timeout {
-  tailf:cli-sequence-commands;
-  choice udld-timeout-choice {
-    case fast-case {
-      leaf fast {
-        tailf:info "in unit of milli-seconds";
-        type empty;
-      }
-      leaf milli {
-        tailf:cli-drop-node-name;
-        must "../fast" { tailf:dependency "../fast"; }
-        type uint16 {
-          range "100..1000";
-          tailf:info "<100-1000>;;timeout in unit of "
-                    +"milli-seconds";
-        }
-        mandatory true;
-      }
-    }
-    case slow-case {
-      leaf slow {
-        tailf:info "in unit of seconds";
-        type empty;
-      }
-      leaf "secs" {
-        must "../slow" { tailf:dependency "../slow"; }
-        tailf:cli-drop-node-name;
-        type uint16 {
-          range "1..60";
-          tailf:info "<1-60>;;timeout in unit of seconds";
-        }
-        mandatory true;
-      }
-    }
-  }
-}
-```
-
-Sometimes the `tailf:cli-incomplete-command` annotation is used to ensure that all parameters are configured. The `cli-incomplete-command` only applies to the C- and I-style CLI. To ensure that prior leaves in a container are also configured when the configuration is written using J-style or NETCONF, proper `must` declarations should be used.
-
-Another example is this, where `tailf:cli-optional-in-sequence` is used:
-
-```yang
-list pool {
- tailf:cli-remove-before-change;
- tailf:cli-suppress-mode;
- tailf:cli-sequence-commands {
- tailf:cli-reset-all-siblings;
- }
- tailf:cli-compact-syntax;
- tailf:cli-incomplete-command;
- key name;
- leaf name {
- type string {
- length "1..31";
- tailf:info "WORD Pool Name or Pool Group";
- }
- }
- leaf ipstart {
- mandatory true;
- tailf:cli-incomplete-command;
- tailf:cli-drop-node-name;
- type inet:ipv4-address {
- tailf:info "A.B.C.D;;Start IP Address of NAT pool";
- }
- }
- leaf ipend {
- mandatory true;
- tailf:cli-incomplete-command;
- tailf:cli-drop-node-name;
- type inet:ipv4-address {
- tailf:info "A.B.C.D;;End IP Address of NAT pool";
- }
- }
- leaf netmask {
- mandatory true;
- tailf:info "Configure Mask for Pool";
- type string {
- tailf:info "/nn or A.B.C.D;;Configure Mask for Pool";
- }
- }
-
- leaf gateway {
- tailf:info "Gateway IP";
- tailf:cli-optional-in-sequence;
- type inet:ipv4-address {
- tailf:info "A.B.C.D;;Gateway IP";
- }
- }
- leaf ha-group-ip {
- tailf:info "HA Group ID";
- tailf:cli-optional-in-sequence;
- type uint16 {
- range "1..31";
- tailf:info "<1-31>;;HA Group ID 1 to 31";
- }
- }
- leaf ha-use-all-ports {
- tailf:info "Specify this if services using this NAT pool "
- +"are transaction based (immediate aging)";
- tailf:cli-optional-in-sequence;
- type empty;
- when "../ha-group-ip" {
- tailf:dependency "../ha-group-ip";
- }
- }
- leaf vrid {
- tailf:info "VRRP vrid";
- tailf:cli-optional-in-sequence;
- when "not(../ha-group-ip)" {
- tailf:dependency "../ha-group-ip";
- }
- type uint16 {
- range "1..31";
- tailf:info "<1-31>;;VRRP vrid 1 to 31";
- }
- }
-
- leaf ip-rr {
- tailf:info "Use IP address round-robin behavior";
- type empty;
- }
-}
-```
-
-The `tailf:cli-optional-in-sequence` means that the parameters should be processed in sequence but a parameter can be skipped. However, if a parameter is specified then only parameters later in the container can follow it.
-
-It is also possible to have some parameters in sequence initially in the container, and then the rest in any order. This is indicated by the `tailf:cli-break-sequence-commands` annotation. For example:
-
-```yang
-list address {
- key ip;
- tailf:cli-suppress-mode;
- tailf:info "Set the IP address of an interface";
- tailf:cli-sequence-commands {
- tailf:cli-reset-all-siblings;
- }
- tailf:cli-compact-syntax;
- leaf ip {
- tailf:cli-drop-node-name;
- type inet:ipv6-prefix;
- }
- leaf link-local {
- type empty;
- tailf:info "Configure an IPv6 link local address";
- tailf:cli-break-sequence-commands;
- }
- leaf anycast {
- type empty;
- tailf:info "Configure an IPv6 anycast address";
- tailf:cli-break-sequence-commands;
- }
-}
-```
-
-Where it is possible to write:
-
-```
- ip 1.1.1.1 link-local anycast
-```
-
-As well as:
-
-```
- ip 1.1.1.1 anycast link-local
-```
-
-### **Leaf Values Not Really Part of the Key**
-
-Sometimes a command for entering a submode has parameters that are not really key values, i.e. not part of the instance identifier, but that still need to be given when entering the submode. For example:
-
-```yang
-list service-group {
- tailf:info "Service Group";
- tailf:cli-remove-before-change;
- key "name";
- leaf name {
- type string {
- length "1..63";
- tailf:info "NAME;;SLB Service Name";
- }
- }
- leaf tcpudp {
- mandatory true;
- tailf:cli-drop-node-name;
- tailf:cli-hide-in-submode;
- type enumeration {
- enum tcp { tailf:info "TCP LB service"; }
- enum udp { tailf:info "UDP LB service"; }
- }
- }
-
- leaf backup-server-event-log {
- tailf:info "Send log info on back up server events";
- tailf:cli-full-command;
- type empty;
- }
- leaf extended-stats {
- tailf:info "Send log info on back up server events";
- tailf:cli-full-command;
- type empty;
- }
- ...
-}
-```
-
-In this case, `tcpudp` is a non-key leaf that needs to be specified as a parameter when entering the `service-group` submode. Once in the submode, the commands `backup-server-event-log` and `extended-stats` are present. Leaves with the `tailf:cli-hide-in-submode` attribute are given after the last key, in the sequence they appear in the list.
-
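-With this model, the device config is entered and rendered with the protocol given on the submode-entering line, along these lines (the name `www` is illustrative):
-
-```
-service-group www tcp
- backup-server-event-log
- extended-stats
-!
-```
-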
-It is also possible to allow leaf values to be entered in between key elements. For example:
-
-```yang
-list community {
- tailf:info "Define a community who can access the SNMP engine";
- key "read remote";
- tailf:cli-suppress-mode;
- tailf:cli-compact-syntax;
- tailf:cli-reset-container;
- leaf read {
- tailf:cli-expose-key-name;
- tailf:info "read only community";
- type string {
- length "1..31";
- tailf:info "WORD;;SNMPv1/v2c community string";
- }
- }
- leaf remote {
- tailf:cli-expose-key-name;
- tailf:info "Specify a remote SNMP entity to which the user belongs";
- type string {
- length "1..31";
- tailf:info "Hostname or A.B.C.D;;IP address of remote SNMP "
- +"entity(length: 1-31)";
- }
- }
-
- leaf oid {
- tailf:info "specific the oid"; // SIC
- tailf:cli-prefix-key {
- tailf:cli-before-key 2;
- }
- type string {
- length "1..31";
- tailf:info "WORD;;The oid qvalue";
- }
- }
-
- leaf mask {
- tailf:cli-drop-node-name;
- type string {
- tailf:info "/nn or A.B.C.D;;The mask";
- }
- }
-}
-```
-
-Here we have a list that is not mapped to a submode. It has two keys, `read` and `remote`, and an optional `oid` leaf that can be specified before the `remote` key. Finally, after the last key, an optional `mask` parameter can be specified. The use of `tailf:cli-expose-key-name` means that the key names should be part of the command, which they are not by default. The above construct results in the command:
-
-```
-community read WORD [oid WORD] remote HOSTNAME [/nn or A.B.C.D]
-```
-
-The `tailf:cli-reset-container` attribute means that all leaves in the container will be reset if any leaf is given.
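-
-A concrete instance of this command could look like the following (all values are illustrative):
-
-```
-community read public oid 1.3.6.1.2 remote 10.1.1.1 /24
-```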
-
-### **Change Controlling Annotations**
-
-Some devices require that a setting be removed before it can be changed, for example, the service-group list above. This is indicated with the `tailf:cli-remove-before-change` annotation. It can be used both on lists and on leaves. A leaf example:
-
-```yang
-leaf source-ip {
- tailf:cli-remove-before-change;
- tailf:cli-no-value-on-delete;
- tailf:cli-full-command;
- type inet:ipv6-address {
- tailf:info "X:X::X:X;;Source IPv6 address used by DNS";
- }
-}
-```
-
-This means that the diff sent to the device will contain first a `no source-ip` command, followed by a new `source-ip` command to set the new value.
-
-The data model also uses the `tailf:cli-no-value-on-delete` annotation, which means that the leaf value should not be present in the **no** command. With the annotation, a diff to modify the source IP from 1.1.1.1 to 2.2.2.2 would look like:
-
-```
-no source-ip
-source-ip 2.2.2.2
-```
-
-And, without the annotation, as:
-
-```
-no source-ip 1.1.1.1
-source-ip 2.2.2.2
-```
-
-### **Ordered-by User Lists**
-
-By default, a diff for an ordered-by-user list contains information about where a new item should be inserted. This is typically not supported by the device. Instead, the commands (diff) sent to the device need to remove all items following the new item and then reinsert the items in the proper order. This behavior is controlled using the `tailf:cli-show-long-obu-diffs` annotation. For example:
-
-```yang
-list access-list {
- tailf:info "Configure Access List";
- tailf:cli-suppress-mode;
- key id;
- leaf id {
- type uint16 {
- range "1..199";
- }
- }
- list rules {
- ordered-by user;
- tailf:cli-suppress-mode;
- tailf:cli-drop-node-name;
- tailf:cli-show-long-obu-diffs;
- key "txt";
- leaf txt {
- tailf:cli-multi-word-key;
- type string;
- }
- }
-}
-```
-
-Suppose we have the access list:
-
-```
-access-list 90 permit host 10.34.97.124
-access-list 90 permit host 172.16.4.224
-```
-
-And we want to change this to:
-
-```
-access-list 90 permit host 10.34.97.124
-access-list 90 permit host 10.34.94.109
-access-list 90 permit host 172.16.4.224
-```
-
-With `tailf:cli-show-long-obu-diffs`, we would generate the diff:
-
-```
-no access-list 90 permit host 172.16.4.224
-access-list 90 permit host 10.34.94.109
-access-list 90 permit host 172.16.4.224
-```
-
-Without the annotation, the diff would be:
-
-```
-# after permit host 10.34.97.124
-access-list 90 permit host 10.34.94.109
-```
-
-### **Default Values**
-
-Often, when a leaf in a config is set to its default value, it is not displayed by the `show running-config` command, but we still need to set it explicitly. Suppose we have the leaf `state`. By default, the value is `active`.
-
-```yang
-leaf state {
- tailf:info "Activate/Block the user(s)";
- type enumeration {
- enum active {
- tailf:info "Activate/Block the user(s)";
- }
- enum block {
- tailf:info "Activate/Block the user(s)";
- }
- }
- default "active";
-}
-```
-
-Suppose the device state is `block` and we want to set it to `active`, i.e. the default value. The default behavior is to send to the device:
-
-```
-no state block
-```
-
-This will not work. The correct command sequence should be:
-
-```
-state active
-```
-
-The way to achieve this is to do the following:
-
-```yang
-leaf state {
- tailf:info "Activate/Block the user(s)";
- type enumeration {
- enum active {
- tailf:info "Activate/Block the user(s)";
- }
- enum block {
- tailf:info "Activate/Block the user(s)";
- }
- }
- default "active";
- tailf:cli-trim-default;
- tailf:cli-show-with-default;
-}
-```
-
-This way, a value for `state` will always be generated. This may seem unintuitive, but the reason it works comes from how the diff is calculated. When generating the diff, the target configuration and the desired configuration are compared line by line. The target config will be:
-
-```
-state block
-```
-
-And the desired config will be:
-
-```
-state active
-```
-
-This will be interpreted as a leaf value change, and the resulting diff will be to set the new value, i.e. `active`.
-
-However, without the `tailf:cli-show-with-default` option, the desired config will be an empty line, i.e. no value set. When we compare the two lines we get:
-
-(current config)
-
-```
-state block
-```
-
-(desired config)
-
-```
-
-```
-
-This will result in the command to remove the configured leaf, i.e.:
-
-```
-no state block
-```
-
-Which does not work.
-
-### **Understanding How the Diffs are Generated**
-
-What you see in the C-style CLI when you do `show configuration` are the commands needed to go from the running config to the configuration you have in your current session. It usually corresponds to the commands you have just issued in your CLI session, but not always.
-
-The output is actually generated by comparing the two configurations, i.e. the running config and your current uncommitted configuration. It is done by running `show running-config` on both the running config and your uncommitted config and then comparing the output line by line. Each line is complemented by some meta-information which makes it possible to generate a better diff.
-
-For example, suppose you modify a leaf value, say set the MTU to 1400 where the previous value was 1500. The two configs will then be:
-
-```
-(current config)                (desired config)
-interface FastEthernet0/0/1     interface FastEthernet0/0/1
- mtu 1500                        mtu 1400
-!                               !
-```
-
-When we compare these configs, the first lines are the same -> no action, but we remember that we have entered the FastEthernet0/0/1 submode. The second line differs in value (the meta-information associated with the lines contains the path and the value). When we analyze the two lines, we determine that a value_set has occurred. The default action when a value has been changed is to output the command for setting the new value, i.e. `mtu 1400`. However, we also need to reposition to the current submode. If this is the first line we are outputting in the submode, we first need to issue the command for entering the submode before issuing the `mtu 1400` command:
-
-```
-interface FastEthernet0/0/1
-```
-
-Similarly, suppose a value has been removed, i.e. `mtu` used to be set but is no longer present:
-
-```
-(desired config)                (current config)
-interface FastEthernet0/0/1     interface FastEthernet0/0/1
-!                                mtu 1400
-                                !
-```
-
-As before, the first lines are equivalent, but the second line has a `!` in the new config and `mtu 1400` in the running config. This is analyzed as being a delete, and the following commands are generated:
-
-```
-interface FastEthernet0/0/1
- no mtu 1400
-```
-
-There are tweaks to this behavior. For example, some devices do not want the `no` command to include the old value, expecting instead just:
-
-```
-no mtu
-```
-
-We can instruct the CLI diff engine to behave in this way by using the YANG annotation `tailf:cli-no-value-on-delete;`:
-
-```yang
-leaf mtu {
-  tailf:cli-no-value-on-delete;
-  type uint16;
-}
-```
-
-It is also possible to tell the CLI engine to not include the element name in the delete operation. For example, the command for setting a password may be:
-
-```
-aaa local-user password cipher "C>9=UF*^V/'Q=^Q`MAF4<1!!"
-```
-
-But the command to delete the password is:
-
-```
-no aaa local-user password
-```
-
-The data model for this would be:
-
-```yang
-// aaa local-user
-container password {
- tailf:info "Set password";
- tailf:cli-flatten-container;
- leaf cipher {
- tailf:cli-no-value-on-delete;
- tailf:cli-no-name-on-delete;
- type string {
- tailf:info "STRING<1-16>/<24>;;The UNENCRYPTED/"
- +"ENCRYPTED password string";
- }
- }
-}
-```
-
-## Modifying the Java Part of the CLI NED
-
-It is often necessary to make some minor modifications to the Java part of a CLI NED. There are mainly four functions that need to be modified: connect, show, applyConfig, and enter/exit config mode.
-
-### **Connecting to a Device**
-
-The CLI NED code should do a few things when the connect callback is invoked.
-
-* Set up a connection to the device (usually SSH).
-* If necessary, send a secondary password to enter exec mode. Typically, a Cisco IOS-like CLI requires the user to give the `enable` command followed by a password.
-* Verify that it is the right kind of device and respond to NSO with a list of capabilities. This is usually done by running the `show version` command, or equivalent, and parsing the output.
-* Configure the CLI session on the device to not use pagination. This is normally done by setting the screen length to 0 (or infinity, or disable). Optionally, it may also adjust the idle timeout.
-
-Some modifications may be needed in this part if the commands for the above differ from the Cisco IOS style; a sketch of the sequence is shown below.
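-
-The exact connect callback signature varies between NED framework versions, so the following is only a sketch of the steps above, not a definitive implementation. It reuses the `session` and `worker` objects seen in the `show()` examples below; the prompts, the `enable` handling, and the `secondaryPassword` field are device-specific assumptions:
-
-```java
-// Hypothetical outline of the connect sequence (prompts and commands assumed).
-session.setTracer(worker);
-
-// Enter exec mode; a Cisco IOS-like CLI asks for an enable password.
-session.print("enable\n");
-session.expect("Password:");
-session.print(secondaryPassword + "\n");
-session.expect(".*#");
-
-// Verify that this is the expected kind of device by parsing 'show version'.
-session.print("show version\n");
-String version = session.expect(".*#");
-if (version.indexOf("Cisco IOS") < 0) {
-    // unexpected device type; report an error back to NSO here
-}
-
-// Disable pagination so that large outputs are not interrupted.
-session.print("terminal length 0\n");
-session.expect(".*#");
-```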
-
-### **Displaying the Configuration of a Device**
-
-NSO will invoke the `show()` callback multiple times, once for each top-level tag in the data model. Some devices have support for displaying just parts of the configuration, others do not.
-
-For a device that cannot display only parts of a config, the recommended strategy is to wait for a `show()` invocation with a well-known top tag and send the entire config at that point. For example, if you know that the data model has a top tag called **interface**, then you can use code like:
-
-```java
-public void show(NedWorker worker, String toptag)
- throws NedException, IOException {
- session.setTracer(worker);
- try {
- int i;
-
- if (toptag.equals("interface")) {
- session.print("show running-config | exclude able-management\n");
- ...
- } else {
- worker.showCliResponse("");
- }
- } catch (...) { ... }
-}
-```
-
-From NSO's point of view, it is perfectly OK to send the entire config as a response to one of the requested top tags and to send an empty response otherwise.
-
-Often, some filtering of the output from the device is required. For example, perhaps part of the configuration should not be sent to NSO, or some keywords should be replaced with others. Here are some examples:
-
-#### Stripping Sections, Headers, and Footers
-
-Some devices start the output from `show running-config` with a short header, and some add a footer. Common headers are `Current configuration:` and a footer may be `end` or `return`. In the example below we strip out a header and remove a footer.
-
-```java
-if (toptag.equals("interface")) {
- session.print("show running-config | exclude able-management\n");
- session.expect("show running-config | exclude able-management");
-
- String res = session.expect(".*#");
-
- i = res.indexOf("Current configuration :");
- if (i >= 0) {
- int n = res.indexOf("\n", i);
- res = res.substring(n+1);
- }
-
- i = res.lastIndexOf("\nend");
- if (i >= 0) {
- res = res.substring(0,i);
- }
-
- worker.showCliResponse(res);
-} else {
- // only respond to first toptag since the A10
- // cannot show different parts of the config.
- worker.showCliResponse("");
-}
-```
-
-Also, you may choose to only model part of a device configuration in which case you can strip out the parts that you have not modelled. For example, stripping out the SNMP configuration:
-
-```java
-if (toptag.equals("context")) {
- session.print("show configuration\n");
- session.expect("show configuration");
-
- String res = session.expect(".*\\[.*\\]#");
-
-    int snmp   = res.indexOf("\nsnmp");
-    int home   = res.indexOf("\nsession-home");
-    int port   = res.indexOf("\nport");
-    int tunnel = res.indexOf("\ntunnel");
-
- if (snmp >= 0) {
- res = res.substring(0,snmp)+res.substring(home,port)+
- res.substring(tunnel);
- } else if (port >= 0) {
- res = res.substring(0,port)+res.substring(tunnel);
- }
-
- worker.showCliResponse(res);
-} else {
- // only respond to first toptag since the STOKEOS
- // cannot show different parts of the config.
- worker.showCliResponse("");
-}
-```
-
-#### Removing Keywords
-
-Sometimes a device generates non-parsable commands in the output from `show running-config`. For example, some A10 devices add a keyword `cpu-process` at the end of the `ip route` command, i.e.:
-
-```
- ip route 10.40.0.0 /14 10.16.156.65 cpu-process
-```
-
-However, it does not accept this keyword when a route is configured. The solution is to simply strip the keyword before sending the config to NSO and to not include the keyword in the data model for the device. The code to do this may look like this:
-
-```java
-if (toptag.equals("interface")) {
- session.print("show running-config | exclude able-management\n");
- session.expect("show running-config | exclude able-management");
-
- String res = session.expect(".*#");
-
- // look for the string cpu-process and remove it
- i = res.indexOf(" cpu-process");
- while (i >= 0) {
- res = res.substring(0,i)+res.substring(i+12);
- i = res.indexOf(" cpu-process");
- }
-
- worker.showCliResponse(res);
-} else {
- // only respond to first toptag since the A10
- // cannot show different parts of the config.
- worker.showCliResponse("");
-}
-```
-
-#### Replacing Keywords
-
-Sometimes a device uses a keyword other than the standard **no** command found in a typical Cisco CLI to delete configuration. NSO will only generate **no** commands (for example, `no shutdown` when an element does not exist), but the device may need `undo` instead. This can be dealt with as a simple transformation of the configuration before sending it to NSO. For example:
-
-```java
-if (toptag.equals("aaa")) {
- session.print("display current-config\n");
- session.expect("display current-config");
-
- String res = session.expect("return");
-
- session.expect(".*>");
-
- // split into lines, and process each line
- lines = res.split("\n");
-
- for(i=0 ; i < lines.length ; i++) {
- int c;
- // delete the version information, not really config
- if (lines[i].indexOf("version ") == 1) {
- lines[i] = "";
- }
- else if (lines[i].indexOf("undo ") >= 0) {
- lines[i] = lines[i].replaceAll("undo ", "no ");
- }
- }
-
- worker.showCliResponse(join(lines, "\n"));
-} else {
- // only respond to first toptag since the H3C
- // cannot show different parts of the config.
- // (well almost)
- worker.showCliResponse("");
-}
-```
-
-Another example is the following situation. A device has a configuration for `port trunk permit vlan 1-3` and may at the same time have disallowed some VLANs using the command `no port trunk permit vlan 4-6`. Since we cannot use a **no** container in the config, we instead add a `disallow` container, and then rely on the Java code to do some processing, e.g.:
-
-```yang
-container disallow {
- container port {
- tailf:info "The port of mux-vlan";
- container trunk {
- tailf:info "Specify current Trunk port's "
- +"characteristics";
- container permit {
- tailf:info "allowed VLANs";
- leaf-list vlan {
- tailf:info "allowed VLAN";
- tailf:cli-range-list-syntax;
- type uint16 {
- range "1..4094";
- }
- }
- }
- }
- }
-}
-```
-
-And, in the Java `show()` code:
-
-```java
-if (toptag.equals("aaa")) {
- session.print("display current-config\n");
- session.expect("display current-config");
-
- String res = session.expect("return");
-
- session.expect(".*>");
-
- // process each line
- lines = res.split("\n");
-
- for(i=0 ; i < lines.length ; i++) {
- int c;
- if (lines[i].indexOf("no port") >= 0) {
- lines[i] = lines[i].replaceAll("no ", "disallow ");
- }
- }
-
- worker.showCliResponse(join(lines, "\n"));
-} else {
- // only respond to first toptag since the H3C
- // cannot show different parts of the config.
- // (well almost)
- worker.showCliResponse("");
-}
-```
-
-A similar transformation needs to take place when NSO sends a configuration change to the device. A more detailed discussion about applying config modifications follows below, but the corresponding code would in this case be:
-
-```java
-lines = data.split("\n");
-for (i=0 ; i < lines.length ; i++) {
- if (lines[i].indexOf("disallow port ") == 0) {
- lines[i] = lines[i].replace("disallow ", "undo ");
- }
-}
-```
-
-#### Different Quoting Practices
-
-If the way a device quotes strings differs from the way it can be modeled in NSO, it can be handled in the Java code. For example, one device does not quote encrypted password strings, which may contain odd characters like the command character `!`. Java code to deal with this may look like:
-
-```java
-if (toptag.equals("aaa")) {
- session.print("display current-config\n");
- session.expect("display current-config");
-
- String res = session.expect("return");
-
- session.expect(".*>");
-
- // process each line
- lines = res.split("\n");
- for(i=0 ; i < lines.length ; i++) {
- if ((c=lines[i].indexOf("cipher ")) >= 0) {
- String line = lines[i];
- String pass = line.substring(c+7);
- String rest;
- int s = pass.indexOf(" ");
- if (s >= 0) {
- rest = pass.substring(s);
- pass = pass.substring(0,s);
- } else {
- s = pass.indexOf("\r");
- if (s >= 0) {
- rest = pass.substring(s);
- pass = pass.substring(0,s);
- }
- else {
- rest = "";
- }
- }
- // find cipher string and quote it
- lines[i] = line.substring(0,c+7)+quote(pass)+rest;
- }
- }
-
- worker.showCliResponse(join(lines, "\n"));
-} else {
- worker.showCliResponse("");
-}
-```
-
-And similarly, de-quoting when applying a configuration:
-
-```java
-lines = data.split("\n");
-for (i=0 ; i < lines.length ; i++) {
- if ((c=lines[i].indexOf("cipher ")) >= 0) {
- String line = lines[i];
- String pass = line.substring(c+7);
- String rest;
- int s = pass.indexOf(" ");
- if (s >= 0) {
- rest = pass.substring(s);
- pass = pass.substring(0,s);
- } else {
- s = pass.indexOf("\r");
- if (s >= 0) {
- rest = pass.substring(s);
- pass = pass.substring(0,s);
- }
- else {
- rest = "";
- }
- }
-    // find cipher string and dequote it
- lines[i] = line.substring(0,c+7)+dequote(pass)+rest;
- }
-}
-```
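-
-The `quote()` and `dequote()` helpers used above (like the `join()` used to reassemble the lines) are assumed to be local utility methods, not part of any NED API. A minimal sketch of what they might look like is shown below, assuming only the quote character and backslash need escaping; the exact escaping rules depend on the device:
-
-```java
-// Hypothetical helpers; adjust the escaping rules to the device at hand.
-private String quote(String s) {
-    StringBuilder sb = new StringBuilder("\"");
-    for (char c : s.toCharArray()) {
-        if (c == '"' || c == '\\')
-            sb.append('\\');          // escape quote and backslash
-        sb.append(c);
-    }
-    return sb.append('"').toString();
-}
-
-private String dequote(String s) {
-    if (!s.startsWith("\"") || !s.endsWith("\""))
-        return s;                     // not quoted; return unchanged
-    String body = s.substring(1, s.length() - 1);
-    return body.replace("\\\"", "\"").replace("\\\\", "\\");
-}
-```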
-
-### **Applying a Config**
-
-NSO will send the configuration to the device in three different callbacks: `prepare()`, `abort()`, and `revert()`. The Java code should issue these commands to the device, but some processing of the commands may be necessary. Also, the ongoing CLI session needs to enter configure mode, issue the commands, and then exit configure mode. Some processing may be needed if the device has different keywords, or different quoting, as described in the "Displaying the Configuration of a Device" section above.
-
-For example, if a device uses `undo` in place of `no` then the code may look like this, where `data` is the string of commands received from NSO:
-
-```java
-lines = data.split("\n");
-for (i=0 ; i < lines.length ; i++) {
- if (lines[i].indexOf("no ") == 0) {
- lines[i] = lines[i].replace("no ", "undo ");
- }
-}
-```
-
-This relies on the fact that NSO will not have any indentation in the commands sent to the device (as opposed to the indentation usually present in the output from `show running-config`).
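-
-Putting this together, the body of the config-applying code typically enters configure mode, sends the (possibly transformed) lines one at a time, and exits again. The following is only a sketch under those assumptions; the mode commands, prompts, and error detection are device-specific, and the exact callback signature varies between NED framework versions:
-
-```java
-// Hypothetical outline; 'data' is the string of commands received from NSO.
-session.print("config terminal\n");
-session.expect(".*\\(config\\)#");
-
-String[] lines = data.split("\n");
-for (int i = 0; i < lines.length; i++) {
-    session.print(lines[i] + "\n");
-    String res = session.expect(".*#");
-    if (res.indexOf("% Invalid") >= 0) {
-        // the device rejected the command; report an error back to NSO here
-    }
-}
-
-session.print("end\n");
-session.expect(".*#");
-```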
-
-## Tail-f CLI NED Annotations
-
-The typical Cisco CLI has two major modes: operational mode and configure mode. In addition, the configure mode has submodes. For example, interfaces are configured in a submode that is entered by giving the command `interface <name>`. Exiting a submode, i.e. giving the **exit** command, leaves you in the parent mode. Submodes can also be embedded in other submodes.
-
-In a typical Cisco CLI, you do not necessarily have to exit a submode to execute a command in a parent mode. In fact, the output of the command `show running-config` hardly contains any exit commands. Instead, an exclamation mark, `!`, which is formally just a comment, indicates that a submode is done. The config is formatted to rely on the fact that if a command isn't found in the current submode, the CLI engine searches for the command in its parent mode.
-
-Another interesting mapping problem is how to interpret the **no** command when multiple leaves are given on a command line. Consider the model:
-
-```yang
-container foo {
- tailf:cli-compact-syntax;
- tailf:cli-sequence-commands;
- presence true;
- leaf a {
- type string;
- }
- leaf b {
- type string;
- }
- leaf c {
- type string;
- }
-}
-```
-
-It corresponds to the command syntax `foo [a <value> [b <value> [c <value>]]]`, i.e. the following commands are valid:
-
-```
-foo
-foo a <value>
-foo a <value> b <value>
-foo a <value> b <value> c <value>
-```
-
-Now, what does it mean to write `no foo a <value> b <value> c <value>`? It could mean that only the `c` leaf should be removed, or it could mean that all leaves should be removed, and it may also mean that the `foo` container should be removed.
-
-There is no clear principle here and no one right solution. The annotations are therefore necessary to help the diff engine figure out what to actually send to the device.
-
-## Annotations
-
-The full set of annotations can be found in the `tailf_yang_cli_extensions` manual page. Not all of the YANG extension annotations are applicable in an NSO context, but most are. The most commonly used annotations are (in alphabetical order):
-
-### **tailf:cli-add-mode**
-
-Used for adding a submode in a container. The default rendering engine maps a container as a command prefix and a list node as a submode. However, sometimes entering a submode does not require the user to give a specific instance. In these cases, you can use the `tailf:cli-add-mode` annotation on a container:
-
-```yang
-container system {
- tailf:info "For system events.";
- container "default" {
- tailf:cli-add-mode;
- tailf:cli-mode-name "cfg-acct-mlist";
- tailf:cli-delete-when-empty;
- presence true;
- container start-stop {
- tailf:info "Record start and stop without waiting";
- leaf group {
- tailf:info "Use Server-group";
- type aaa-group-type;
- }
- }
- }
-}
-```
-
-In this example, the `tailf:cli-add-mode` annotation tells the CLI engine to render the `default` container as a submode; in other words, there will be a command `system default` for entering the default container as a submode. All further commands will use that context as a base. In the example above, the `default` container will only contain one command `start-stop group`, rendered from the `start-stop` container (rendered as a prefix) and the `group` leaf.
-
-### **tailf:cli-allow-join-with-key**
-
-Tells the parser that the list name is allowed to be joined together with the first key, i.e. written without space in between. This is used to render, for example, the `interface FastEthernet` command, where the list is `FastEthernet` and the key is the interface name. In a typical Cisco CLI they are allowed to be written both as `interface FastEthernet 1` and as `interface FastEthernet1`.
-
-```yang
-list FastEthernet {
- tailf:info "FastEthernet IEEE 802.3";
- tailf:cli-allow-join-with-key {
- tailf:cli-display-joined;
- }
- tailf:cli-mode-name "config-if";
- key name;
- leaf name {
- type string {
- pattern "[0-9]+.*";
- tailf:info "<0-66>/<0-128>;;FastEthernet interface number";
-    }
-  }
-}
-```
-
-In the above example, the `tailf:cli-display-joined` substatement is used to tell the command renderer that it should display a list item using the format without space.
-
-### **tailf:cli-allow-join-with-value**
-
-This tells the parser that a leaf value is allowed to be written without space between the leaf name and the value. This is typically the case when referring to an interface. For example:
-
-```yang
-leaf FastEthernet {
- tailf:info "FastEthernet IEEE 802.3";
- tailf:cli-allow-join-with-value {
- tailf:cli-display-joined;
- }
- type string;
- tailf:non-strict-leafref {
- path "/ios:interface/ios:FastEthernet/ios:name";
- }
-}
-```
-
-In the example above, a leaf `FastEthernet` is used to point to an existing interface. The command is allowed to be written both as `FastEthernet 1` and as `FastEthernet1` when referring to FastEthernet interface 1. The substatement says which format is preferred when rendering the command.
-
-### **tailf:cli-prefix-key and tailf:cli-before-key**
-
-Normally, keys come before other leaves when a list command is used, and this is required in YANG. However, this is not always the case in Cisco-style CLIs. An example is the `route-map` command, where the name and sequence number are the keys, but the leaf `operation` (permit or deny) is given in between the first and the second key. The `tailf:cli-prefix-key` annotation tells the parser to expect a given leaf before the keys, and the substatement `tailf:cli-before-key <N>` can be used to specify that the leaf should occur in front of key number N. For example:
-
-```yang
-list route-map {
- tailf:info "Route map tag";
- tailf:cli-mode-name "config-route-map";
- tailf:cli-compact-syntax;
- tailf:cli-full-command;
- key "name sequence";
- leaf name {
- type string {
- tailf:info "WORD;;Route map tag";
- }
- }
- // route-map * #
- leaf sequence {
- tailf:cli-drop-node-name;
- type uint16 {
- tailf:info "<0-65535>;;Sequence to insert to/delete from "
- +"existing route-map entry";
- range "0..65535";
- }
- }
- // route-map * permit
- // route-map * deny
- leaf operation {
- tailf:cli-drop-node-name;
- tailf:cli-prefix-key {
- tailf:cli-before-key 2;
- }
- type enumeration {
- enum deny {
- tailf:code-name "op_deny";
- tailf:info "Route map denies set operations";
- }
- enum permit {
- tailf:code-name "op_internet";
- tailf:info "Route map permits set operations";
- }
- }
- default permit;
- }
-}
-```
-
-A lot of things are going on in the example above, in addition to the `tailf:cli-prefix-key` and `tailf:cli-before-key` annotations. The `tailf:cli-drop-node-name` annotation tells the parser to ignore the name of the leaf (to not accept that as input, or render it when displaying the configuration).
-
-### **tailf:cli-boolean-no**
-
-This tells the parser to render a leaf of type boolean as `no <name>` and `<name>` instead of the default `<name> false` and `<name> true`. The other alternative to this is to use a leaf of type empty and the `tailf:cli-show-no` annotation. The difference is subtle. A leaf with `tailf:cli-boolean-no` would not be displayed unless explicitly configured to either true or false, whereas a type empty leaf with `tailf:cli-show-no` would always be displayed if not set. For example:
-
-```yang
-leaf keepalive {
- tailf:info "Enable keepalive";
- tailf:cli-boolean-no;
- type boolean;
-}
-```
-
-In the above example, the `keepalive` leaf is set to true when the command `keepalive` is given and to false when `no keepalive` is given. The well-known `shutdown` command, on the other hand, is modeled as a type empty leaf with the `tailf:cli-show-no` annotation:
-
-```yang
-leaf shutdown {
-  // Note: default to "no shutdown" in order to be able to bring it up.
- tailf:info "Shutdown the selected interface";
- tailf:cli-full-command;
- tailf:cli-show-no;
- type empty;
-}
-```
-
-### **tailf:cli-sequence-commands and tailf:cli-break-sequence-commands**
-
-These annotations are used to tell the CLI to only accept leaves in a container in the same order as they appear in the data model. This is typically required when the leaf names are hidden using the `tailf:cli-drop-node-name` annotation. It is very common in the Cisco CLI that commands accept multiple parameters, and such commands must be mapped to the setting of multiple leaves in the data model. For example, the `aggregate-address` command in the `router bgp` submode:
-
-```yang
-// router bgp * / aggregate-address
-container aggregate-address {
- tailf:info "Configure BGP aggregate entries";
- tailf:cli-compact-syntax;
- tailf:cli-sequence-commands {
- tailf:cli-reset-all-siblings;
- }
- leaf address {
- tailf:cli-drop-node-name;
- type inet:ipv4-address {
- tailf:info "A.B.C.D;;Aggregate address";
- }
- }
- leaf mask {
- tailf:cli-drop-node-name;
- type inet:ipv4-address {
- tailf:info "A.B.C.D;;Aggregate mask";
- }
- }
- leaf advertise-map {
- tailf:cli-break-sequence-commands;
- tailf:info "Set condition to advertise attribute";
- type string {
- tailf:info "WORD;;Route map to control attribute "
- +"advertisement";
- }
- }
- leaf as-set {
- tailf:info "Generate AS set path information";
- type empty;
- }
- leaf attribute-map {
- type string {
- tailf:info "WORD;;Route map for parameter control";
- }
- }
- leaf as-override {
- tailf:info "Override matching AS-number while sending update";
- type empty;
- }
- leaf route-map {
- type string {
- tailf:info "WORD;;Route map for parameter control";
- }
- }
- leaf summary-only {
- tailf:info "Filter more specific routes from updates";
- type empty;
- }
- leaf suppress-map {
- tailf:info "Conditionally filter more specific routes from "
- +"updates";
- type string {
- tailf:info "WORD;;Route map for suppression";
- }
- }
-}
-```
-
-In the above example, the `tailf:cli-sequence-commands` annotation tells the parser to require the leaves in the `aggregate-address` container to be entered in the same order as in the data model, i.e. first `address`, then `mask`. Since these leaves also have the `tailf:cli-drop-node-name` annotation, it would be impossible for the parser to know which leaf to map the values to unless the order of appearance was used. The `tailf:cli-break-sequence-commands` annotation on the `advertise-map` leaf tells the parser that from that leaf and onward the ordering is no longer important, and the leaves can be entered in any order (and leaves can be skipped).
-
-Two other annotations are often used in combination with `tailf:cli-sequence-commands`: `tailf:cli-reset-all-siblings` and `tailf:cli-compact-syntax`. The first tells the parser that all leaves should be reset when any leaf is entered, i.e. if the user first gives the command:
-
-```
-aggregate-address 1.1.1.1 255.255.255.0 as-set summary-only
-```
-
-This would result in the leaves address, mask, as-set, and summary-only being set in the configuration. However, if the user then entered:
-
-```
-aggregate-address 1.1.1.1 255.255.255.0 as-set
-```
-
-The assumed result of this is that `summary-only` is no longer configured, i.e. all leaves in the container are zeroed out when the command is entered again. The `tailf:cli-compact-syntax` annotation tells the CLI engine to render all leaves in the container on one line instead of the default rendering, where each leaf is rendered on a separate line:
-
-```
-aggregate-address 1.1.1.1
-aggregate-address 255.255.255.0
-aggregate-address as-set
-aggregate-address summary-only
-```
-
-The above will be rendered on one line (compact syntax) as:
-
-```
-aggregate-address 1.1.1.1 255.255.255.0 as-set summary-only
-```
-
-### **tailf:cli-case-insensitive**
-
-Tells the parser that this particular leaf should be allowed to be entered in case-insensitive format. The reason this is needed is that some devices display a command in one case, while others display the same command in a different case. Normally, command parsing is case-sensitive. For example:
-
-```yang
-leaf dhcp {
- tailf:info "Default Gateway obtained from DHCP";
- tailf:cli-case-insensitive;
- type empty;
-}
-```
-
-### **tailf:cli-compact-syntax**
-
-This annotation tells the CLI engine to render all leaves in the container on one command line, i.e. instead of the default rendering where each leaf is rendered on a separate line:
-
-```
-aggregate-address 1.1.1.1
-aggregate-address 255.255.255.0
-aggregate-address as-set
-aggregate-address summary-only
-```
-
-It should be rendered on one line (compact syntax) as:
-
-```
-aggregate-address 1.1.1.1 255.255.255.0 as-set summary-only
-```
-
-### **tailf:cli-delete-container-on-delete**
-
-Deleting items in the database is tricky when using the Cisco CLI syntax. The reason is that `no <command>` is open to multiple interpretations in many cases, for example, when multiple leaves are set in one command, or a presence container is set in addition to a leaf. For example:
-
-```yang
-container dampening {
- tailf:info "Enable event dampening";
- presence "true";
- leaf dampening-time {
- tailf:cli-drop-node-name;
- tailf:cli-delete-container-on-delete;
- tailf:info "<1-30>;;Half-life time for penalty";
- type uint16 {
- range 1..30;
- }
- }
-}
-```
-
-This data model allows both the `dampening` command and the command `dampening 10`. When the command `no dampening 10` is issued, should both the dampening container and the leaf be removed, or only the leaf? The `tailf:cli-delete-container-on-delete` annotation tells the CLI engine to also delete the container when the leaf is removed.
-
-### **tailf:cli-delete-when-empty**
-
-This annotation tells the CLI engine to remove a list entry or a presence container when all content of the container or list instance has been removed. For example:
-
-```yang
-container access-class {
- tailf:info "Filter connections based on an IP access list";
- tailf:cli-compact-syntax;
- tailf:cli-sequence-commands;
- tailf:cli-reset-container;
- tailf:cli-flatten-container;
- list access-list {
- tailf:cli-drop-node-name;
- tailf:cli-compact-syntax;
- tailf:cli-reset-container;
- tailf:cli-suppress-mode;
- tailf:cli-delete-when-empty;
- key direction;
- leaf direction {
- type enumeration {
- enum "in" {
- tailf:info "Filter incoming connections";
- }
- enum "out" {
- tailf:info "Filter outgoing connections";
- }
- }
- }
- leaf access-list {
- tailf:cli-drop-node-name;
- tailf:cli-prefix-key;
- type exp-ip-acl-type;
- mandatory true;
- }
- leaf vrf-also {
- tailf:info "Same access list is applied for all VRFs";
- type empty;
- }
- }
-}
-```
-
-In this case, the `tailf:cli-delete-when-empty` annotation tells the CLI engine to remove an access-list instance when it has neither an `access-list` nor a `vrf-also` child.
-
-### **tailf:cli-diff-dependency**
-
-This annotation tells the CLI engine that there is a dependency between the current node and another node, which must be taken into account when generating diff commands to send to the device or when rendering the `show configuration` command output. It can have two different substatements: `tailf:cli-trigger-on-set` and `tailf:cli-trigger-on-all`.
-
-Without substatements, it should be thought of as similar to a leafref, i.e. if the dependency target is deleted, first perform any modifications to this leaf. For example, the redistribute `ospf` submode in `router bgp`:
-
-```yang
-// router bgp * / redistribute ospf *
-list ospf {
- tailf:info "Open Shortest Path First (OSPF)";
- tailf:cli-suppress-mode;
- tailf:cli-delete-when-empty;
- tailf:cli-compact-syntax;
- key id;
- leaf id {
- type uint16 {
- tailf:info "<1-65535>;;Process ID";
- range "1..65535";
- }
- }
- list vrf {
- tailf:info "VPN Routing/Forwarding Instance";
- tailf:cli-suppress-mode;
- tailf:cli-delete-when-empty;
- tailf:cli-compact-syntax;
- tailf:cli-diff-dependency "/ios:ip/ios:vrf";
- tailf:cli-diff-dependency "/ios:vrf/ios:definition";
- key name;
- leaf name {
- type string {
- tailf:info "WORD;;VPN Routing/Forwarding Instance (VRF) name";
- }
- }
- }
-}
-```
-
-The `tailf:cli-diff-dependency "/ios:ip/ios:vrf"` tells the engine that if the `ip vrf` part of the configuration is deleted, then first display any changes to this part. This can be used when the device requires a certain ordering of the commands.
-
-If the `tailf:cli-trigger-on-all` substatement is used, it means that the target will always be displayed before the current node. Normally, the order in the YANG file is used, but that is not always what you want, and it might not even be possible if the nodes are embedded in a container.
-
-The `tailf:cli-trigger-on-set` substatement tells the engine that the ordering should be taken into account when this leaf is set and some other leaf is deleted. The other leaf should then be deleted before this one is set. Suppose you have this data model:
-
-```yang
-list b {
- key "id";
- leaf id {
- type string;
- }
- leaf name {
- type string;
- }
- leaf y {
- type string;
- }
-}
-list a {
- key id;
- leaf id {
- tailf:cli-diff-dependency "/c[id=current()/../id]" {
- tailf:cli-trigger-on-set;
- }
- tailf:cli-diff-dependency "/b[id=current()/../id]";
- type string;
- }
-}
-list c {
- key id;
- leaf id {
- tailf:cli-diff-dependency "/a[id=current()/../id]" {
- tailf:cli-trigger-on-set;
- }
- tailf:cli-diff-dependency "/b[id=current()/../id]";
- type string;
- }
-}
-```
-
-Then, the `tailf:cli-diff-dependency "/b[id=current()/../id]"` statement tells the CLI that before a `b` list instance is deleted, the `c` instance with the same name needs to be changed.
-
-```yang
-tailf:cli-diff-dependency "/a[id=current()/../id]" {
- tailf:cli-trigger-on-set;
-}
-```
-
-This annotation, on the other hand, says that before this instance is created, any changes to the `a` instance with the same name need to be displayed.
-
-Suppose you have the configuration:
-
-```
-b foo
-!
-a foo
-!
-```
-
-If you then create `c foo` and delete `a foo`, it should be displayed as:
-
-```
-no a foo
-c foo
-```
-
-If you instead delete `c foo` and create `a foo`, it should be rendered as:
-
-```
-no c foo
-a foo
-```
-
-That is, in the reverse order.
-
-### **tailf:cli-disallow-value**
-
-This annotation is used to disambiguate parsing. This is sometimes necessary when `tailf:cli-drop-node-name` is used. For example:
-
-```yang
-container authentication {
- tailf:info "Authentication";
- choice auth {
- leaf word {
- tailf:cli-drop-node-name;
- tailf:cli-disallow-value "md5|text";
- type string {
- tailf:info "WORD;;Plain text authentication string "
- +"(8 chars max)";
- }
- }
- container md5 {
- tailf:info "Use MD5 authentication";
- leaf key-chain {
- tailf:info "Set key chain";
- type string {
- tailf:info "WORD;;Name of key-chain";
- }
- }
- }
- }
-}
-```
-
-When the command `authentication md5...` is entered, the CLI parser cannot determine if the leaf **word** should be set to the value `"md5"` or if the leaf `md5` should be set. By adding the `tailf:cli-disallow-value` annotation, you can tell the CLI parser that certain regular expressions are not valid values. An alternative would be to add a restriction to the string type of **word**, but this is much more difficult since restrictions can only be used to specify allowed values, not disallowed values.
-
-### **tailf:cli-display-joined**
-
-See the description of `tailf:cli-allow-join-with-value` and `tailf:cli-allow-join-with-key`.
-
-### **tailf:cli-display-separated**
-
-This annotation can be used on a presence container and tells the CLI engine that the container should be displayed as a separate command, even when a leaf in the container is set. The default rendering does not do this. For example:
-
-```yang
-container ntp {
- tailf:info "Configure NTP";
- // interface * / ntp broadcast
- container broadcast {
- tailf:info "Configure NTP broadcast service";
- //tailf:cli-display-separated;
- presence true;
- container client {
- tailf:info "Listen to NTP broadcasts";
- tailf:cli-full-command;
- presence true;
- }
- }
-}
-```
-
-If both `broadcast` and `client` are created in the configuration then this will be displayed as:
-
-```
-ntp broadcast
-ntp broadcast client
-```
-
-That is the rendering when the `tailf:cli-display-separated` annotation is used. If the annotation isn't present, it would only be displayed as:
-
-```
-ntp broadcast client
-```
-
-The creation of the `broadcast` container would be implied.
-
-### **tailf:cli-drop-node-name**
-
-This might be the most used annotation of them all. It can be used for multiple purposes. Primarily, it tells the CLI engine that the node name should be ignored, which is needed when there is no corresponding keyword in the command, typically when a command requires multiple parameters:
-
-```yang
-container exec-timeout {
- tailf:info "Set the EXEC timeout";
- tailf:cli-sequence-commands;
- tailf:cli-compact-syntax;
- leaf minutes {
- tailf:info "<0-35791>;;Timeout in minutes";
- tailf:cli-drop-node-name;
- type uint32;
- }
- leaf seconds {
- tailf:info "<0-2147483>;;Timeout in seconds";
- tailf:cli-drop-node-name;
- type uint32;
- }
-}
-```
-
-However, it can also be used to introduce ambiguity, or a choice in the parse tree if you like. Suppose you need to support these commands:
-
-```yang
-// interface * / vrf forwarding
-// interface * / ip vrf forwarding
-choice vrf-choice {
- container ip-vrf {
- tailf:cli-no-keyword;
- tailf:cli-drop-node-name;
- container ip {
- container vrf {
- leaf forwarding {
- tailf:info "Configure forwarding table";
- type string {
- tailf:info "WORD;;VRF name";
- }
- tailf:non-strict-leafref {
- path "/ios:ip/ios:vrf/ios:name";
- }
- }
- }
- }
-}
-container vrf {
- tailf:info "VPN Routing/Forwarding parameters on the interface";
- // interface * / vrf forwarding
- leaf forwarding {
- tailf:info "Configure forwarding table";
- type string {
- tailf:info "WORD;;VRF name";
- }
- tailf:non-strict-leafref {
- path "/ios:vrf/ios:definition/ios:name";
- }
- }
-}
-}
-
-// interface * / ip
-container ip {
- tailf:info "Interface Internet Protocol config commands";
-}
-```
-
-In the above case, when the parser sees the beginning of the command `ip`, it can interpret it either as entering the `interface */vrf-choice/ip-vrf/ip/vrf` config tree or as the `interface */ip` tree, since the tokens consumed are the same in both branches. When the parser sees a `tailf:cli-drop-node-name` in the parse tree, it will try to match the current token stream to that parse tree and, if that fails, backtrack and try other paths.
-
-### **tailf:cli-exit-command**
-
-Tells the CLI engine to add an explicit exit command in the current submode. Normally, a submode does not have an explicit exit command for leaving it; instead, leaving the submode is implied by the following command residing in a parent mode. However, to avoid ambiguity, it is sometimes necessary. For example, in the `address-family` submode:
-
-```yang
-container address-family {
- tailf:info "Enter Address Family command mode";
- container ipv6 {
- tailf:info "Address family";
- container unicast {
- tailf:cli-add-mode;
- tailf:cli-mode-name "config-router-af";
- tailf:info "Address Family Modifier";
- tailf:cli-full-command;
- tailf:cli-exit-command "exit-address-family" {
- tailf:info "Exit from Address Family configuration "
- +"mode";
- }
- }
- }
-}
-```
-
-### **tailf:cli-explicit-exit**
-
-This tells the CLI engine to render explicit exit commands instead of the default `!` when leaving a submode. The annotation is inherited by all submodes. For example:
-
-```yang
-container interface {
- tailf:info "Configure interfaces";
- tailf:cli-diff-dependency "/ios:vrf";
- tailf:cli-explicit-exit;
- // interface Loopback
- list Loopback {
- tailf:info "Loopback interface";
- tailf:cli-allow-join-with-key {
- tailf:cli-display-joined;
- }
- tailf:cli-mode-name "config-if";
- tailf:cli-suppress-key-abbreviation;
- // tailf:cli-full-command;
- key name;
- leaf name {
- type string {
- pattern "([0-9\.])+";
- tailf:info "<0-2147483647>;;Loopback interface number";
- }
- }
- uses interface-common-grouping;
- }
-}
-```
-
-Without the `tailf:cli-explicit-exit` annotation, the edit sequences sent to the NED device will contain `!` at the end of a mode, and rely on the next command to move from one submode to some other place in the CLI. This is the way the Cisco CLI usually works. However, it may cause problems if the next edit command is also a valid command in the current submode. Using `tailf:cli-explicit-exit` gets around this problem.
-
-### **tailf:cli-expose-key-name**
-
-By default, the key leaf names are not shown in the CLI, but sometimes you want them to be visible, for example:
-
-```yang
-// ip explicit-path name *
-list explicit-path {
- tailf:info "Configure explicit-path";
- tailf:cli-mode-name "cfg-ip-expl-path";
- key name;
- leaf name {
- tailf:info "Specify explicit path by name";
- tailf:cli-expose-key-name;
- type string {
- tailf:info "WORD;;Enter name";
- }
- }
-}
-```
-
-### **tailf:cli-flat-list-syntax**
-
-By default, a leaf-list is rendered as a single line with the elements enclosed by `[` and `]`. If you instead want the values listed on one line, separated by spaces and without the brackets, this is the annotation to use. For example:
-
-```yang
-// class-map * / match cos
-leaf-list cos {
- tailf:info "IEEE 802.1Q/ISL class of service/user priority values";
- tailf:cli-flat-list-syntax;
- type uint16 {
- range "0..7";
- tailf:info "<0-7>;;Enter up to 4 class-of-service values"+
- " separated by white-spaces";
- }
-}
-```
-
-### **tailf:cli-flatten-container**
-
-This annotation is a bit tricky. It tells the CLI engine that the container should be allowed to co-exist with leaves on the same command line, i.e. flattened. Normally, once the parser has entered a container it will not exit. However, if the container is flattened, the container will be exited once all leaves in the container have been entered. Also, a flattened container will be displayed together with sibling leaves on the same command line (provided the surrounding container has `tailf:cli-compact-syntax`).
-
-Suppose you want to model the command `limit [inbound <a> <b>] [outbound <a> <b>] [mtu <value>]`. In other words, the inbound and outbound settings are optional, but if you give `inbound`, you have to specify two 16-bit integers, and you can always specify `mtu`.
-
-```yang
-container foo {
- tailf:cli-compact-syntax;
- container inbound {
- tailf:cli-compact-syntax;
- tailf:cli-sequence-commands;
- tailf:cli-flatten-container;
- leaf a {
- tailf:cli-drop-node-name;
- type uint16;
- }
- leaf b {
- tailf:cli-drop-node-name;
- type uint16;
- }
- }
- container outbound {
- tailf:cli-compact-syntax;
- tailf:cli-sequence-commands;
- tailf:cli-flatten-container;
- leaf a {
- tailf:cli-drop-node-name;
- type uint16;
- }
- leaf b {
- tailf:cli-drop-node-name;
- type uint16;
- }
- }
- leaf mtu {
- type uint16;
- }
-}
-```
-
-In the above example the `tailf:cli-flatten-container` tells the parser that it should exit the outbound/inbound container once both values have been entered. Without the annotation, it would not be possible to exit the container once it has been entered. It would be possible to have the command `foo inbound 1 3` or `foo outbound 1 2` but not both at the same time, and not the final mtu leaf. The `tailf:cli-compact-syntax` annotation tells the renderer to display all leaves on the same line. If it wasn't used the line setting `foo inbound 1 2 outbound 3 4 mtu 1500` would be displayed as:
-
-```
-foo inbound 1
-foo inbound 2
-foo outbound 3
-foo outbound 4
-foo mtu 1500
-```
-
-The annotation `tailf:cli-sequence-commands` tells the CLI that the user has to enter the leaves inside the container in the specified order. Without this annotation, it would not be possible to drop the names of the leaves and still have a deterministic parser. With the annotation, the parser knows that for the command `foo inbound 1 2`, leaf a should be assigned the value 1 and leaf b the value 2.
-
-Another example:
-
-```yang
-container htest {
- tailf:cli-add-mode;
- container param {
- tailf:cli-hide-in-submode;
- tailf:cli-flatten-container;
- tailf:cli-compact-syntax;
- leaf a {
- type uint16;
- }
- leaf b {
- type uint16;
- }
- }
- leaf mtu {
- type uint16;
- }
-}
-```
-
-The above model results in the command `htest param <a> <b>` for entering the submode. Once the submode has been entered, the command `mtu <value>` is available. Without the `tailf:cli-flatten-container` annotation, it wouldn't be possible to use the `tailf:cli-hide-in-submode` annotation to attach the leaves to the command for entering the submode.
-
-### **tailf:cli-full-command**
-
-This annotation tells the parser to not accept any more input beyond this element. By default, the parser will allow the setting of multiple leaves in the same command, and both enter a submode and set leaf values in the submode. In most cases, it doesn't matter that the parser accepts commands that are not actually generated by the device in the output of `show running-config`. It is however needed to avoid ambiguity, or just to make the NSO CLI for the device more user-friendly.
-
-```yang
-container transceiver {
- tailf:info "Select from transceiver configuration commands";
- container "type" {
- tailf:info "type keyword";
- // transceiver type all
- container all {
- tailf:cli-add-mode;
- tailf:cli-mode-name "config-xcvr-type";
- tailf:cli-full-command;
- // transceiver type all / monitoring
- container monitoring {
- tailf:info "Enable/disable monitoring";
- presence true;
- leaf interval {
- tailf:info "Set interval for monitoring";
- type uint16 {
- tailf:info "<300-3600>;;Time interval for monitoring "+
- "transceiver in seconds";
- range "300..3600";
- }
- }
- }
- }
- }
-}
-```
-
-In the above example, it is possible to have the command `transceiver type all` for entering a submode, and then give the command `monitoring [interval <300-3600>]`. If the `tailf:cli-full-command` annotation had not been used, the following would also have been a valid command: `transceiver type all monitoring [interval <300-3600>]`. In the above example, it doesn't make a difference as far as being able to parse the configuration on a device. The device will never show the one-line command syntax but will always display it as two lines, one for entering the submode and one for setting the monitor interval.
-
-### **tailf:cli-full-no**
-
-This annotation tells the CLI parser that no further arguments should be accepted for this path when the path is traversed as an argument to the **no** command.
-
-Example of use:
-
-```yang
-// event manager applet * / action * info
-container info {
- tailf:info "Obtain system specific information";
- // event manager applet * / action info type
- container "type" {
- tailf:info "Type of information to obtain";
- tailf:cli-full-no;
- container snmp {
- tailf:info "SNMP information";
- // event manager applet * / action info type snmp var
- container var {
- tailf:info "Trap variable";
- tailf:cli-compact-syntax;
- tailf:cli-sequence-commands;
- tailf:cli-reset-container;
- leaf variable-name {
- tailf:cli-drop-node-name;
- tailf:cli-incomplete-command;
- type string {
- tailf:info "WORD;;Trap variable name";
- }
- }
- }
- }
- }
-}
-```
-
-### **tailf:cli-hide-in-submode**
-
-In some cases, you need to give some parameters for entering a submode, but the submode cannot be modeled as a list, or the parameters should not be modeled as key elements of the list but rather behave as leaves. In these cases, you model the parameter as a leaf and use the `tailf:cli-hide-in-submode` annotation. It has two effects: the leaf is displayed as part of the command for entering the submode when rendering the config, and the leaf is not available as a command in the submode.
-
-For example:
-
-```yang
-// event manager applet *
-list applet {
- tailf:info "Register an Event Manager applet";
- tailf:cli-mode-name "config-applet";
- tailf:cli-exit-command "exit" {
- tailf:info "Exit from Event Manager applet configuration submode";
- }
- key name;
- leaf name {
- type string {
- tailf:info "WORD;;Name of the Event Manager applet";
- }
- }
- // event manager applet * authorization
- leaf authorization {
- tailf:info "Specify an authorization type for the applet";
- tailf:cli-hide-in-submode;
- type enumeration {
- enum bypass {
- tailf:info "EEM aaa authorization type bypass";
- }
- }
- }
- // event manager applet * class
- leaf class {
- tailf:info "Specify a class for the applet";
- tailf:cli-hide-in-submode;
- type string {
- tailf:info "Class A-Z | default - default class";
- pattern "[A-Z]|default";
- }
- }
- // event manager applet * trap
- leaf trap {
- tailf:info "Generate an SNMP trap when applet is triggered.";
- tailf:cli-hide-in-submode;
- type empty;
- }
-}
-```
-
-In the example above, the key of the list is the **name** leaf, but to enter the submode the user may also give the arguments `event manager applet <name> [authorization bypass] [class <class>] [trap]`. It is clear that these leaves are not keys of the list, since giving the same name but a different authorization, class, or trap argument does not result in a new applet instance.
-
-### **tailf:cli-incomplete-command**
-
-Tells the CLI that it should not be possible to hit `cr` (enter) after the current element. This is usually the case when a command takes multiple parameters, for example, given the following data model:
-
-```yang
-container foo {
- tailf:cli-compact-syntax;
- tailf:cli-sequence-commands;
- presence true;
- leaf a {
- type string;
- }
- leaf b {
- type string;
- }
- leaf c {
- type string;
- }
-}
-```
-
-The valid commands are `foo [a <value> [b <value> [c <value>]]]`. If it instead should be `foo a <value> b <value> [c <value>]`, i.e. the parameters `a` and `b` are mandatory and `c` is optional, then the `tailf:cli-incomplete-command` annotation should be used as follows:
-
-```yang
-container foo {
- tailf:cli-compact-syntax;
- tailf:cli-sequence-commands;
- tailf:cli-incomplete-command;
- presence true;
- leaf a {
- tailf:cli-incomplete-command;
- type string;
- }
- leaf b {
- type string;
- }
- leaf c {
- type string;
- }
-}
-```
-
-In other words, the command is incomplete after entering just `foo`, and also after entering `foo a <value>`, but not after `foo a <value> b <value>` or `foo a <value> b <value> c <value>`.
-
-### **tailf:cli-incomplete-no**
-
-This annotation is similar to the `tailf:cli-incomplete-command` above, but applies to **no** commands. Sometimes you want to prevent the user from entering a generic **no** command. Suppose you have the data model:
-
-```yang
-container foo {
- tailf:cli-compact-syntax;
- tailf:cli-sequence-commands;
- tailf:cli-incomplete-command;
- presence true;
- leaf a {
- tailf:cli-incomplete-command;
- type string;
- }
- leaf b {
- type string;
- }
- leaf c {
- type string;
- }
-}
-```
-
-Then it would be valid to write any of the following:
-
-```
-no foo
-no foo a <value>
-no foo a <value> b <value>
-no foo a <value> b <value> c <value>
-```
-
-If you only want the last version of this to be a valid command, then you can use `tailf:cli-incomplete-no` to enforce this. For example:
-
-```yang
-container foo {
- tailf:cli-compact-syntax;
- tailf:cli-sequence-commands;
- tailf:cli-incomplete-command;
- tailf:cli-incomplete-no;
- presence true;
- leaf a {
- tailf:cli-incomplete-command;
- tailf:cli-incomplete-no;
- type string;
- }
- leaf b {
- tailf:cli-incomplete-no;
- type string;
- }
- leaf c {
- type string;
- }
-}
-```
-
-### **tailf:cli-list-syntax**
-
-The default rendering of a leaf-list element is as a command taking a list of values enclosed in square brackets. Given the following element:
-
-```yang
-// class-map * / source-address
-container source-address {
- tailf:info "Source address";
- leaf-list mac {
- tailf:info "MAC address";
- type string {
- tailf:info "H.H.H;;MAC address";
- }
- }
-}
-```
-
-This would result in the command `source-address mac [ H.H.H... H.H.H ]`, instead of the desired `source-address mac H.H.H`. Given the configuration:
-
-```
-source-address {
- mac [ 1410.9fd8.8999 a110.9fd8.8999 bb10.9fd8.8999 ]
-}
-```
-
-It should be rendered as:
-
-```
-source-address mac 1410.9fd8.8999
-source-address mac a110.9fd8.8999
-source-address mac bb10.9fd8.8999
-```
-
-This is achieved by adding the `tailf:cli-list-syntax` annotation. For example:
-
-```yang
-// class-map * / source-address
-container source-address {
- tailf:info "Source address";
- leaf-list mac {
- tailf:info "MAC address";
- tailf:cli-list-syntax;
- type string {
- tailf:info "H.H.H;;MAC address";
- }
- }
-}
-```
-
-An alternative would be to model this as a list, i.e.:
-
-```yang
-// class-map * / source-address
-container source-address {
- tailf:info "Source address";
- list mac {
- tailf:info "MAC address";
- tailf:cli-suppress-mode;
- key address;
- leaf address {
- type string {
- tailf:info "H.H.H;;MAC address";
- }
- }
- }
-}
-```
-
-In many cases, this may be the better choice. Notice how the `tailf:cli-suppress-mode` annotation is used to prevent the list from being rendered as a submode.
-
-### **tailf:cli-mode-name**
-
-This annotation is not really needed when writing a NED. It is used to tell the CLI which prompt to use when in the submode. Without specific instructions, the CLI will invent a prompt based on the name of the submode container/list and the list instance. If a specific prompt is desired this annotation can be used. For example:
-
-```yang
-container transceiver {
- tailf:info "Select from transceiver configuration commands";
- container "type" {
- tailf:info "type keyword";
- // transceiver type all
- container all {
- tailf:cli-add-mode;
- tailf:cli-mode-name "config-xcvr-type";
- tailf:cli-full-command;
- // transceiver type all / monitoring
- container monitoring {
- tailf:info "Enable/disable monitoring";
- presence true;
- leaf interval {
- tailf:info "Set interval for monitoring";
- type uint16 {
- tailf:info "<300-3600>;;Time interval for monitoring "+
- "transceiver in seconds";
- range "300..3600";
- }
- }
- }
- }
- }
-}
-```
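-
-When entering the `transceiver type all` submode, the CLI would then use the configured mode name in the prompt, along the lines of:
-
-```
-(config)# transceiver type all
-(config-xcvr-type)#
-```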
-
-### tailf:cli-multi-value
-
-This annotation indicates that a leaf should accept multiple tokens and concatenate them. By default, only a single token is accepted as the value of a leaf; if spaces are required, the value needs to be quoted. If this isn't desired, the `tailf:cli-multi-value` annotation can be used to tell the parser that the leaf should accept multiple tokens. A common example of this is the description command. It is modeled as:
-
-```
-// event manager applet * / description
-leaf "description" {
- tailf:info "Add or modify an applet description";
- tailf:cli-full-command;
- tailf:cli-multi-value;
- type string {
- tailf:info "LINE;;description";
- }
-}
-```
-
-In the above example, the description command will take all tokens to the end of the line, concatenate them with a space, and use the result as the leaf value. The `tailf:cli-full-command` annotation is used to tell the parser that no other command following this one can be entered on the same command line; the parser would not be able to determine where the argument to this command ended and the next command commenced anyway.
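-
-For example, a line such as the following (prompt and text invented for illustration) is stored as a single string value, without any quoting:
-
-```
-(config-applet-a)# description triggered config backup applet
-```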
-
-### tailf:cli-multi-word-key and tailf:cli-max-words
-
-By default, all key values consist of a single parser token, i.e. a string without spaces, or a quoted string. If multiple tokens should be accepted for a single key element, without quotes, then the `tailf:cli-multi-word-key` annotation can be used. The sub-annotation `tailf:cli-max-words` can be used to tell the parser that at most a fixed number of words should be allowed for the key. For example:
-
-```yang
-container permit {
- tailf:info "Specify community to accept";
- presence "Specify community to accept";
- list permit-list {
- tailf:cli-suppress-mode;
- tailf:cli-delete-when-empty;
- tailf:cli-drop-node-name;
- key expr;
- leaf expr {
- tailf:cli-multi-word-key {
- tailf:cli-max-words 10;
- }
- type string {
- tailf:info "LINE;;An ordered list as a regular-expression";
- }
- }
- }
-}
-```
-
-By bounding how many words the key may consume, the `tailf:cli-max-words` annotation also makes it possible for additional content to be entered on the same command line.
-
-### tailf:cli-no-name-on-delete and tailf:cli-no-value-on-delete
-
-When generating delete commands towards the device, the default behavior is to simply add `no` in front of the line you are trying to remove. However, this is not always allowed. In some cases, only parts of the command are allowed. For example, suppose you have the data model:
-
-```yang
-container ospf {
- tailf:info "OSPF routes Administrative distance";
- leaf external {
- tailf:info "External routes";
- type uint32 {
- range "1.. 255";
- tailf:info "<1-255>;;Distance for external routes";
- }
- tailf:cli-suppress-no;
- tailf:cli-no-value-on-delete;
- tailf:cli-no-name-on-delete;
- }
- leaf inter-area {
- tailf:info "Inter-area routes";
- type uint32 {
- range "1.. 255";
- tailf:info "<1-255>;;Distance for inter-area routes";
- }
- tailf:cli-suppress-no;
- tailf:cli-no-name-on-delete;
- tailf:cli-no-value-on-delete;
- }
- leaf intra-area {
- tailf:info "Intra-area routes";
- type uint32 {
- range "1.. 255";
- tailf:info "<1-255>;;Distance for intra-area routes";
- }
- tailf:cli-suppress-no;
- tailf:cli-no-name-on-delete;
- tailf:cli-no-value-on-delete;
- }
-}
-```
-
-If the old configuration is `ospf external 3 inter-area 4 intra-area 1`, the default behavior would be to send `no ospf external 3 inter-area 4 intra-area 1`, but this would generate an error. Instead, the device simply wants `no ospf`. This is achieved by adding `tailf:cli-no-name-on-delete` (telling the CLI engine to remove the element name from the **no** line) and `tailf:cli-no-value-on-delete` (telling the CLI engine to strip the leaf value from the command line to be sent).
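-
-With these annotations on all three leaves, the delete generated for the example above collapses to the single command the device expects:
-
-```
-no ospf
-```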
-
-### tailf:cli-optional-in-sequence
-
-This annotation is used in combination with `tailf:cli-sequence-commands`. It tells the parser that a leaf in the sequence isn't mandatory. Suppose you have the data model:
-
-```yang
-container foo {
- tailf:cli-compact-syntax;
- tailf:cli-sequence-commands;
- presence true;
- leaf a {
- tailf:cli-incomplete-command;
- type string;
- }
- leaf b {
- tailf:cli-incomplete-command;
- type string;
- }
- leaf c {
- type string;
- }
-}
-```
-
-If you want the command to behave as `foo a [b] c`, i.e. the leaves `a` and `c` are required and `b` is optional, and if `b` is entered it must be entered after `a` and before `c`, this is achieved by adding `tailf:cli-optional-in-sequence` to `b`:
-
-```yang
-container foo {
- tailf:cli-compact-syntax;
- tailf:cli-sequence-commands;
- presence true;
- leaf a {
- tailf:cli-incomplete-command;
- type string;
- }
- leaf b {
- tailf:cli-incomplete-command;
- tailf:cli-optional-in-sequence;
- type string;
- }
- leaf c {
- type string;
- }
-}
-```
-
-A live example of this from the Cisco-ios data model is:
-
-```
-// voice translation-rule * / rule *
-list rule {
- tailf:info "Translation rule";
- tailf:cli-suppress-mode;
- tailf:cli-delete-when-empty;
- tailf:cli-incomplete-command;
- tailf:cli-compact-syntax;
- tailf:cli-sequence-commands {
- tailf:cli-reset-all-siblings;
- }
- ordered-by "user";
- key tag;
- leaf tag {
- type uint8 {
- tailf:info "<1-15>;;Translation rule tag";
- range "1..15";
- }
- }
- leaf reject {
- tailf:info "Call block rule";
- tailf:cli-optional-in-sequence;
- type empty;
- }
- leaf "pattern" {
- tailf:cli-drop-node-name;
- tailf:cli-full-command;
- tailf:cli-multi-value;
- type string {
- tailf:info "WORD;;Matching pattern";
- }
- }
-}
-```
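-
-Since `reject` is optional in the sequence, both of the following command lines would parse (tag and pattern invented for illustration):
-
-```
-rule 1 reject /555/
-rule 1 /555/
-```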
-
-### tailf:cli-prefix-key
-
-This annotation is used when the key element of a list isn't the first value that you give when setting a list element (for example, when entering a submode). This is similar to `tailf:cli-hide-in-submode`, except it allows the leaf values to be entered in between key elements. In the example below, the `match` leaf is entered before the filter ID:
-
-```yang
-container radius {
- tailf:info "RADIUS server configuration command";
- // radius filter *
- list filter {
- tailf:info "Packet filter configuration";
- key id;
- leaf id {
- type string {
- tailf:info "WORD;;Name of the filter (max 31 characters, longer will "
- +"be rejected";
- }
- }
- leaf match {
- tailf:cli-drop-node-name;
- tailf:cli-prefix-key;
- type enumeration {
- enum match-all {
- tailf:info "Filter if all of the attributes matches";
- }
- enum match-any {
- tailf:info "Filter if any of the attributes matches";
- }
- }
- }
- }
-}
-```
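-
-On the command line, the `match` keyword is thus given before the filter name, for example (filter name invented for illustration):
-
-```
-radius filter match-all filter1
-```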
-
-It is also possible to have a sub-annotation to `tailf:cli-prefix-key` that specifies that the leaf should occur before a certain key position. For example:
-
-```yang
-list route-map {
- tailf:info "Route map tag";
- tailf:cli-mode-name "config-route-map";
- tailf:cli-compact-syntax;
- tailf:cli-full-command;
- key "name sequence";
- leaf name {
- type string {
- tailf:info "WORD;;Route map tag";
- }
- }
- // route-map * #
- leaf sequence {
- tailf:cli-drop-node-name;
- type uint16 {
- tailf:info "<0-65535>;;Sequence to insert to/delete from "
- +"existing route-map entry";
- range "0..65535";
- }
- }
- // route-map * permit
- // route-map * deny
- leaf operation {
- tailf:cli-drop-node-name;
- tailf:cli-prefix-key {
- tailf:cli-before-key 2;
- }
- type enumeration {
- enum deny {
- tailf:code-name "op_deny";
- tailf:info "Route map denies set operations";
- }
- enum permit {
- tailf:code-name "op_internet";
- tailf:info "Route map permits set operations";
- }
- }
- default permit;
- }
- // route-map * / description
- leaf "description" {
- tailf:info "Route-map comment";
- tailf:cli-multi-value;
- type string {
- tailf:info "LINE;;Comment up to 100 characters";
- length "0..100";
- }
- }
-}
-```
-
-The keys for this list are `name` and `sequence`, but in between you need to specify `deny` or `permit`. This is not a key, since you cannot have two different list instances that have the same name and sequence number but differ only in `deny` and `permit`.
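-
-A route-map entry is therefore entered with the operation between the two key values, in the familiar style (name and sequence number invented for illustration):
-
-```
-route-map foo permit 10
-```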
-
-### tailf:cli-range-list-syntax
-
-This annotation is used to group list instances, or values in a leaf-list, into ranges. The type of the value is not restricted to integers; it also works with strings, so a value such as `1-5, t1, t2` is possible.
-
-```
-// spanning-tree vlans-root
-container vlans-root {
- tailf:cli-drop-node-name;
- list vlan {
- tailf:info "VLAN Switch Spanning Tree";
- tailf:cli-range-list-syntax;
- tailf:cli-suppress-mode;
- tailf:cli-delete-when-empty;
- key id;
- leaf id {
- type uint16 {
- tailf:info "WORD;;vlan range, example: 1,3-5,7,9-11";
- range "1..4096";
- }
- }
- }
-}
-```
-
-What will exist in the database is separate instances, i.e. if the configuration is `vlan 1,3-5,7,9-11`, this will result in the database having the instances 1, 3, 4, 5, 7, 9, 10, and 11. Similarly, to create these instances on the device, the command generated by NSO will be `vlan 1,3-5,7,9-11`. Without this annotation, NSO would generate a separate command for each instance, i.e.:
-
-```
-vlan 1
-vlan 3
-vlan 4
-vlan 5
-vlan 7
-...
-```
-
-The same applies to leaf-lists:
-
-```
-leaf-list vlan {
- tailf:info "Range of vlans to add to the instance mapping";
- tailf:cli-range-list-syntax;
- type uint16 {
- tailf:info "LINE;;vlan range ex: 1-65, 72, 300 -200";
- }
-}
-```
-
-### tailf:cli-remove-before-change
-
-Some settings need to be unset before they can be set. This can be accommodated by using the `tailf:cli-remove-before-change` annotation. An example of such a leaf is:
-
-```
-// ip vrf * / rd
-leaf rd {
- tailf:info "Specify Route Distinguisher";
- tailf:cli-full-command;
- tailf:cli-remove-before-change;
- type rd-type;
-}
-```
-
-You are not allowed to define a new route distinguisher before removing the old one.
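-
-Changing the value therefore results in a delete followed by a set, along these lines (values invented for illustration):
-
-```
-ip vrf blue
- no rd 65000:1
- rd 65000:2
-!
-```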
-
-### tailf:cli-replace-all
-
-This annotation is used on leaf-lists to tell the CLI engine that the entire list should be written and not just the additions or subtractions, which is the default behavior for leaf-lists. For example:
-
-```
-// controller * / channel-group
-list channel-group {
- tailf:info "Specify the timeslots to channel-group "+
- "mapping for an interface";
- tailf:cli-suppress-mode;
- tailf:cli-delete-when-empty;
- key number;
- leaf number {
- type uint8 {
- range "0..30";
- }
- }
- leaf-list timeslots {
- tailf:cli-replace-all;
- tailf:cli-range-list-syntax;
- type uint16;
- }
-}
-```
-
-The `timeslots` leaf-list is changed by writing the entire range value. The default would be to generate commands adding and deleting individual values from the range.
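-
-For example, if the configured range changes from `1-10` to `1-24`, the full new value is sent instead of an addition of `11-24` (values invented for illustration):
-
-```
-channel-group 0 timeslots 1-24
-```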
-
-### tailf:cli-reset-siblings and tailf:cli-reset-all-siblings
-
-This annotation is a sub-annotation to `tailf:cli-sequence-commands`. The problem it addresses is what should happen when a command that takes multiple parameters is run a second time. Consider the data model:
-
-```yang
-container foo {
- tailf:cli-compact-syntax;
- tailf:cli-sequence-commands {
- tailf:cli-reset-siblings;
- }
- presence true;
- leaf a {
- type string;
- }
- leaf b {
- type string;
- }
- leaf c {
- type string;
- }
-}
-```
-
-You are allowed to enter any of the below commands:
-
-```
-foo
-foo a
-foo a b
-foo a b c
-```
-
-If you first enter the command `foo a 1 b 2 c 3`, what will be stored in the database is `foo` being present, the leaf `a` having the value 1, the leaf `b` having the value 2, and the leaf `c` having the value 3.
-
-Now, if the command `foo a 3` is executed, it will set the value of leaf `a` to 3, but will leave leaf `b` and `c` as they were before. This is probably not the way the device works. In most cases, it expects the leaves `b` and `c` to be unset. The annotation `tailf:cli-reset-siblings` tells the CLI engine that all siblings covered by the `tailf:cli-sequence-commands` should be reset.
-
-Another similar case is when you have some leaves covered by the command sequencing, and some not. For example:
-
-```yang
-container foo {
- tailf:cli-compact-syntax;
- tailf:cli-sequence-commands {
- tailf:cli-reset-all-siblings;
- }
- presence true;
- leaf a {
- type string;
- }
- leaf b {
- tailf:cli-break-sequence-commands;
- type string;
- }
- leaf c {
- type string;
- }
-}
-```
-
-The above model will allow the user to enter the `b` and `c` leaves in any order, as long as leaf `a` is entered first. The annotation `tailf:cli-reset-siblings` would reset only the leaves up to the `tailf:cli-break-sequence-commands`, whereas `tailf:cli-reset-all-siblings` tells the CLI engine to reset all siblings, including those outside the command sequencing.
-
-### tailf:cli-reset-container
-
-This annotation can be used on both containers/lists and on leaves, but with slightly different meanings. When used on a container, it means that whenever the container is entered, all leaves in it are reset.
-
-When used on a leaf, it means that whenever that leaf is set, all other leaves in the container are reset. For example:
-
-```
-// license udi
-container udi {
- tailf:cli-compact-syntax;
- tailf:cli-sequence-commands;
- tailf:cli-reset-container;
- leaf pid {
- type string;
- }
- leaf sn {
- type string;
- }
-}
-container ietf {
- tailf:info "IETF graceful restart";
- container helper {
- tailf:info "helper support";
- presence "helper support";
- leaf disable {
- tailf:cli-reset-container;
- tailf:cli-delete-container-on-delete;
- tailf:info "disable helper support";
- type empty;
- }
- leaf strict-lsa-checking {
- tailf:info "enable helper strict LSA checking";
- type empty;
- }
- }
-}
-```
-
-### tailf:cli-show-long-obu-diffs
-
-Changes to lists that have the `ordered-by "user"` annotation are shown as insert, delete, and move operations. However, most devices do not support such operations on their lists. In these cases, if you want to insert an element in the middle of a list, you need to first delete all elements following the insertion point, add the new element, and then re-add all the elements you deleted. The `tailf:cli-show-long-obu-diffs` annotation tells the CLI engine to do exactly this. For example:
-
-```yang
-list foo {
- ordered-by user;
- tailf:cli-show-long-obu-diffs;
- tailf:cli-suppress-mode;
- key id;
- leaf id {
- type string;
- }
-}
-```
-
-If the old configuration is:
-
-```
-foo a
-foo b
-foo c
-foo d
-```
-
-The desired configuration is:
-
-```
-foo a
-foo b
-foo e
-foo c
-foo d
-```
-
-NSO will send the following to the device:
-
-```
-no foo c
-no foo d
-foo e
-foo c
-foo d
-```
-
-An example from the cisco-ios model is:
-
-```
-// ip access-list extended *
-container extended {
- tailf:info "Extended Access List";
- tailf:cli-incomplete-command;
- list ext-named-acl {
- tailf:cli-drop-node-name;
- tailf:cli-full-command;
- tailf:cli-mode-name "config-ext-nacl";
- key name;
- leaf name {
- type ext-acl-type;
- }
- list ext-access-list-rule {
- tailf:cli-suppress-mode;
- tailf:cli-delete-when-empty;
- tailf:cli-drop-node-name;
- tailf:cli-compact-syntax;
- tailf:cli-show-long-obu-diffs;
- ordered-by user;
- key rule;
- leaf rule {
- tailf:cli-drop-node-name;
- tailf:cli-multi-word-key;
- type string {
- tailf:info "deny;;Specify packets to reject\n"+
- "permit;;Specify packets to forwards\n"+
- "remark;;Access list entry comment";
- pattern "(permit.*)|(deny.*)|(no.*)|(remark.*)|([0-9]+.*)";
- }
- }
- }
- }
-}
-```
-
-### tailf:cli-show-no
-
-One common CLI behavior is to show not only when something is configured, but also when it isn't configured, by displaying it prefixed with `no`. You can tell the CLI engine that you want this behavior by using the `tailf:cli-show-no` annotation. It can be used both on leaves and on presence containers. For example:
-
-```
-// ipv6 cef
-container cef {
- tailf:info "Cisco Express Forwarding";
- tailf:cli-display-separated;
- tailf:cli-show-no;
- presence true;
-}
-```
-
-And,
-
-```
-// interface * / shutdown
-leaf shutdown {
- // Note: default to "no shutdown" in order to be able to bring if up.
- tailf:info "Shutdown the selected interface";
- tailf:cli-full-command;
- tailf:cli-show-no;
- type empty;
-}
-```
-
-However, this is a much more subtle behavior than one may think, and it is not obvious when `tailf:cli-show-no` and `tailf:cli-boolean-no` should be used. For example, it would also be possible to model the `shutdown` leaf as a boolean value, i.e.:
-
-```
-// interface * / shutdown
-leaf shutdown {
- tailf:cli-boolean-no;
- type boolean;
-}
-```
-
-The problem with the above is that when a new interface is created, say a VLAN interface, the `shutdown` leaf would not be set to anything and you would not send anything to the device. With the `cli-show-no` definition, you would send `no shutdown` since the shutdown leaf would not be defined when a new interface VLAN instance is created.
-
-The boolean version can be tweaked to behave in a similar way using the `default` annotation and `tailf:cli-show-with-default`, i.e.:
-
-```
-// interface * / shutdown
-leaf shutdown {
- tailf:cli-show-with-default;
- tailf:cli-boolean-no;
- type boolean;
- default "false";
-}
-```
-
-The problem with this is that if you explicitly configure the leaf to false in NSO, you will send `no shutdown` to the device (which is fine), but if you then read the config from the device it will not display `no shutdown` since it now has its default setting. This will lead to an out-of-sync situation in NSO. NSO thinks the value should be set to false (which is different from the leaf not being set), whereas the device reports the value as being unset.
-
-The whole situation comes from the fact that NSO and the device treat default values differently. NSO considers a leaf as either being set or not set. If a leaf is set to its default value, it is still considered as set. A leaf must be explicitly deleted for it to become unset. Whereas a typical Cisco device considers a leaf unset if you set it to its default value.
-
-### tailf:cli-show-with-default
-
-This tells the CLI engine to render a leaf not only when it is actually set, but also when it has its default value. For example:
-
-```yang
-leaf "input" {
- tailf:cli-boolean-no;
- tailf:cli-show-with-default;
- tailf:cli-full-command;
- type boolean;
- default true;
-}
-```
-
-### tailf:cli-suppress-list-no
-
-Tells the CLI that it should not be possible to delete all list instances at once, i.e. the command `no foo` is not allowed; it needs to be `no foo <instance>`. For example:
-
-```yang
-list class-map {
- tailf:info "Configure QoS Class Map";
- tailf:cli-mode-name "config-cmap";
- tailf:cli-suppress-list-no;
- tailf:cli-delete-when-empty;
- tailf:cli-no-key-completion;
- tailf:cli-sequence-commands;
- tailf:cli-full-command;
- // class-map *
- key name;
- leaf name {
- tailf:cli-disallow-value "type|match-any|match-all";
- type string {
- tailf:info "WORD;;class-map name";
- }
- }
-}
-```
-
-### tailf:cli-suppress-mode
-
-By default, all lists are rendered as submodes. This can be suppressed using the `tailf:cli-suppress-mode` annotation. For example, the data model:
-
-```yang
-list foo {
- key id;
- leaf id {
- type string;
- }
- leaf mtu {
- type uint16;
- }
-}
-```
-
-If you have the configuration:
-
-```
-foo a {
- mtu 1400;
-}
-foo b {
- mtu 1500;
-}
-```
-
-It would be rendered as:
-
-```
-foo a
-mtu 1400
-!
-foo b
-mtu 1500
-!
-```
-
-However, if you add `tailf:cli-suppress-mode`:
-
-```yang
-list foo {
- tailf:cli-suppress-mode;
- key id;
- leaf id {
- type string;
- }
- leaf mtu {
- type uint16;
- }
-}
-```
-
-It will be rendered as:
-
-```
-foo a mtu 1400
-foo b mtu 1500
-```
-
-### tailf:cli-key-format
-
-The format string is used when parsing a key value and when generating a key value for an existing configuration. The key items are numbered from 1 to N, and the format string should indicate how they are related by using `$(X)` (where X is the key number). For example:
-
-```yang
-list interface {
- tailf:cli-key-format "$(1)/$(2)/$(3):$(4)";
- key "chassis slot subslot number";
- leaf chassis {
- type uint8 {
- range "1 .. 4";
- }
- }
- leaf slot {
- type uint8 {
- range "1 .. 16";
- }
- }
- leaf subslot {
- type uint8 {
- range "1 .. 48";
- }
- }
- leaf number {
- type uint8 {
- range "1 .. 255";
- }
- }
-}
-```
-
-It will be rendered as:
-
-```
-interface 1/2/3:4
-```
-
-### tailf:cli-recursive-delete
-
-When generating configuration diffs, delete all contents of a container or list before deleting the node itself. For example:
-
-```yang
-list foo {
- tailf:cli-recursive-delete;
- key "id"";
- leaf id {
- type string;
- }
- leaf a {
- type uint8;
- }
- leaf b {
- type uint8;
- }
- leaf c {
- type uint8;
- }
-}
-```
-
-It will be rendered as:
-
-```bash
-# show full
-foo bar
- a 1
- b 2
- c 3
-!
-# ex
-# no foo bar
-# show configuration
-foo bar
- no a 1
- no b 2
- no c 3
-!
-no foo bar
-#
-```
-
-### tailf:cli-suppress-no
-
-Specifies that the CLI should not auto-render `no` commands for this element. An element with this annotation will not appear in the completion list for the `no` command. For example:
-
-```yang
-list foo {
- tailf:cli-recursive-delete;
- key "id"";
- leaf id {
- type string;
- }
- leaf a {
- type uint8;
- }
- leaf b {
- tailf:cli-suppress-no;
- type uint8;
- }
- leaf c {
- type uint8;
- }
-}
-```
-
-It will be rendered as:
-
-```
-(config-foo-bar)# no ?
-Possible completions:
- a
- c
- ---
-```
-
-The problem with the above is that the diff will still generate the **no** commands. To avoid that, you must also use the `tailf:cli-no-value-on-delete` and `tailf:cli-no-name-on-delete` annotations.
-
-```
-(config-foo-bar)# no ?
-Possible completions:
- a
- c
- ---
- service Modify use of network based services
-(config-foo-bar)# ex
-(config)# no foo bar
-(config)# show config
-foo bar
- no a 1
- no b 2
- no c 3
-!
-no foo bar
-(config)#
-```
-
-### tailf:cli-trim-default
-
-Do not display the value if it is the same as the default. Note that this annotation only works when the with-defaults basic-mode capability is set to `explicit` and the user has explicitly set the value to the default. For example:
-
-```yang
-list foo {
- key "id"";
- leaf id {
- type string;
- }
- leaf a {
- type uint8;
- default 1;
- }
- leaf b {
- tailf:cli-trim-default;
- type uint8;
- default 2;
- }
-}
-```
-
-It will be rendered as:
-
-```
-(config)# foo bar
-(config-foo-bar)# a ?
-Possible completions:
- [1]
-(config-foo-bar)# a 2 b ?
-Possible completions:
- [2]
-(config-foo-bar)# a 2 b 3
-(config-foo-bar)# commit
-Commit complete.
-(config-foo-bar)# show full
-foo bar
- a 2
- b 3
-!
-(config-foo-bar)# a 1 b 2
-(config-foo-bar)# commit
-Commit complete.
-(config-foo-bar)# show full
-foo bar
- a 1
-!
-```
-
-### tailf:cli-embed-no-on-delete
-
-Embed `no` in front of the element name instead of at the beginning of the line. For example:
-
-```yang
-list foo {
- key "id";
- leaf id {
- type string;
- }
- leaf a {
- type uint8;
- }
- container x {
- leaf b {
- type uint8;
- tailf:cli-embed-no-on-delete;
- }
- }
-}
-```
-
-It will be rendered as:
-
-```
-(config-foo-bar)# show full
-foo bar
- a 1
- x b 3
-!
-(config-foo-bar)# no x
-(config-foo-bar)# show conf
-foo bar
- x no b 3
-!
-```
-
-### tailf:cli-allow-range
-
-This means that a non-integer key should allow range expressions. It can be used in key leaves only, and the key must support a range format. The range applies only when matching existing instances. For example:
-
-```yang
-list interface {
- key name;
- leaf name {
- type string;
- tailf:cli-allow-range;
- }
- leaf number {
- type uint32;
- }
-}
-```
-
-It will be rendered as:
-
-```
-(config)# interface eth0-100 number 90
-Error: no matching instances found
-(config)# interface
-Possible completions:
- eth0 eth1 eth2 eth3 eth4 eth5 range
-(config)# interface eth0-3 number 100
-(config-interface-eth0-3)# ex
-(config)# interface eth4-5 number 200
-(config-interface-eth4-5)# commit
-Commit complete.
-(config-interface-eth4-5)# ex
-(config)# do show running-config interface
-interface eth0
- number 100
-!
-interface eth1
- number 100
-!
-interface eth2
- number 100
-!
-interface eth3
- number 100
-!
-interface eth4
- number 200
-!
-interface eth5
- number 200
-!
-```
-
-### tailf:cli-case-sensitive
-
-Specifies that this node is case-sensitive. If applied to a container or a list, any nodes below will also be case-sensitive. For example:
-
-```yang
-list foo {
- tailf:cli-case-sensitive;
- key "id";
- leaf id {
- type string;
- }
- leaf a {
- type string;
- }
-}
-```
-
-It will be rendered as:
-
-```
-(config)# foo bar a test
-(config-foo-bar)# ex
-(config)# commit
-Commit complete.
-(config)# do show running-config foo
-foo bar
- a test
-!
-(config)# foo bar a Test
-(config-foo-bar)# ex
-(config)# foo Bar a TEST
-(config-foo-Bar)# commit
-Commit complete.
-(config-foo-Bar)# ex
-(config)# do show running-config foo
-foo Bar
- a TEST
-!
-foo bar
- a Test
-!
-```
-
-### tailf:cli-expose-ns-prefix
-
-When used, it forces the CLI to display the namespace prefix of all children. For example:
-
-```yang
-list foo {
- tailf:cli-expose-ns-prefix;
- key "id"";
- leaf id {
- type string;
- }
- leaf a {
- type uint8;
- }
- leaf b {
- type uint8;
- }
- leaf c {
- type uint8;
- }
-}
-```
-
-It will be rendered as:
-
-```
-(config)# foo bar
-(config-foo-bar)# ?
-Possible completions:
- example:a
- example:b
- example:c
- ---
-```
-
-### tailf:cli-show-obu-comments
-
-Makes the CLI engine generate `insert` comments when displaying configuration changes of `ordered-by user` lists. It should not be used together with `tailf:cli-show-long-obu-diffs`. For example:
-
-```yang
- container policy {
- list policy-list {
- tailf:cli-drop-node-name;
- tailf:cli-show-obu-comments;
- ordered-by user;
- key policyid;
- leaf policyid {
- type uint32 {
- tailf:info "policyid;;Policy ID.";
- }
- }
- leaf-list srcintf {
- tailf:cli-flat-list-syntax {
- tailf:cli-replace-all;
- }
- type string;
- }
- leaf-list srcaddr {
- tailf:cli-flat-list-syntax {
- tailf:cli-replace-all;
- }
- type string;
- }
- leaf-list dstaddr {
- tailf:cli-flat-list-syntax {
- tailf:cli-replace-all;
- }
- type string;
- }
- leaf action {
- type enumeration {
- enum accept {
- tailf:info "Action accept.";
- }
- enum deny {
- tailf:info "Action deny.";
- }
- }
- }
- }
- }
-```
-
-It will be rendered as:
-
-```cli
-admin@ncs(config-policy-4)# commit dry-run outformat cli
-...
- policy {
- policy-list 1 {
- - action accept;
- + action deny;
- }
- + # after policy-list 3
- + policy-list 4 {
- + srcintf aaa;
- + srcaddr bbb;
- + dstaddr ccc;
- + }
- }
- }
- }
- }
- }
-```
-
-### tailf:cli-multi-line-prompt
-
-This tells the CLI to automatically enter multi-line mode when prompting the user for a value for this leaf. The user enters multi-line mode by pressing Enter after the leaf name instead of typing a value on the same line. For example:
-
-```yang
-leaf message {
- tailf:cli-multi-line-prompt;
- type string;
-}
-```
-
-If configured on the same line, no prompt will appear and it will be rendered as:
-
-```
-(config)# message aaa
-```
-
-If Enter is pressed directly after the leaf name, it will be rendered as:
-
-```
-(config)# message
-() (aaa):
-[Multiline mode, exit with ctrl-D.]
-> Lorem ipsum dolor sit amet, consectetuer adipiscing elit.
-> Aenean commodo ligula eget dolor. Aenean massa.
-> Cum sociis natoque penatibus et magnis dis parturient montes,
-> nascetur ridiculus mus. Donec quam felis, ultricies nec,
-> pellentesque eu, pretium quis, sem.
->
-(config)# commit
-Commit complete.
-ubuntu(config)# do show running-config message
-message "Lorem ipsum dolor sit amet, consectetuer adipiscing elit. \nAenean
-commodo ligula eget dolor. Aenean massa. \nCum sociis natoque penatibus et
-magnis dis parturient montes, \nnascetur ridiculus mus. Donec quam felis,
-ultricies nec,\n pellentesque eu, pretium quis, sem. \n"
-(config)#
-```
-
-### tailf:link target
-
-This statement specifies that the data node should be implemented as a link to another data node, called the target data node. This means that whenever the node is modified, the system modifies the target data node instead, and whenever the data node is read, the system returns the value of the target data node. Note that if the data node is a leaf, the target node MUST also be a leaf, and if the data node is a leaf-list, the target node MUST also be a leaf-list. The argument is an XPath absolute location path. If the target lies within lists, all keys must be specified. A key either has a value or is a reference to a key in the path of the source node, using the function `current()` as a starting point for an XPath location path. For example:
-
-```yang
-container foo {
- list bar {
- key id;
- leaf id {
- type uint32;
- }
- leaf a {
- type uint32;
- }
- leaf b {
- tailf:link "/example:foo/example:bar[id=current()/../id]/example:a";
- type uint32;
- }
- }
-}
-```
-
-It will be rendered as:
-
-```
-(config)# foo bar 1
-ubuntu(config-bar-1)# ?
-Possible completions:
- a
- b
- ---
- commit Commit current set of changes
- describe Display transparent command information
- exit Exit from current mode
- help Provide help information
- no Negate a command or set its defaults
- pwd Display current mode path
- top Exit to top level and optionally run command
-(config-bar-1)# b 100
-(config-bar-1)# show config
-foo bar 1
- b 100
-!
-(config-bar-1)# commit
-Commit complete.
-(config-bar-1)# show full
-foo bar 1
- a 100
- b 100
-!
-(config-bar-1)# a 20
-(config-bar-1)# commit
-Commit complete.
-(config-bar-1)# show full
-foo bar 1
- a 20
- b 20
-!
-```
-
-
diff --git a/development/advanced-development/developing-neds/generic-ned-development.md b/development/advanced-development/developing-neds/generic-ned-development.md
deleted file mode 100644
index 6a4a394f..00000000
--- a/development/advanced-development/developing-neds/generic-ned-development.md
+++ /dev/null
@@ -1,220 +0,0 @@
----
-description: Create generic NEDs.
----
-
-# Generic NED Development
-
-As described in previous sections, the CLI NEDs are almost programming-free. The NSO CLI engine takes care of parsing the stream of characters that come from "show running-config \[toptag]" and also automatically produces the sequence of CLI commands required to take the system from one state to another.
-
-A generic NED is required when we want to manage a device that neither speaks NETCONF nor SNMP, nor can be modeled so that ConfD - loaded with those models - gets a CLI that looks almost/exactly like the CLI of the managed device. For example, devices that have other proprietary CLIs, devices that can only be configured over other protocols such as REST, Corba, XML-RPC, SOAP, other proprietary XML solutions, etc.
-
-In a manner similar to the CLI NED, the Generic NED needs to be able to connect to the device, return the capabilities, perform changes to the device, and finally, grab the entire configuration of the device.
-
-The interface that a Generic NED has to implement is very similar to the interface of a CLI NED. The main differences are:
-
-* When NSO has calculated a diff for a specific managed device, it will for CLI NEDs also calculate the exact set of CLI commands to send to the device, according to the YANG models loaded for the device. In the case of a generic NED, NSO will instead send an array of operations to perform towards the device in the form of DOM manipulations. The generic NED class will receive an array of `NedEditOp` objects (see the sketch after this list). Each `NedEditOp` object contains:
- * The operation to perform, i.e. CREATED, DELETED, VALUE\_SET, etc.
- * The keypath to the object in question.
- * An optional value
-* When NSO wants to sync the configuration from the device to NSO, the CLI NED only has to issue a series of `show running-config [toptag]` commands and reply with the output received from the device. A generic NED has to do more work. It is given a transaction handler, which it must attach to over the Maapi interface. Then the NED code must - by some means - retrieve the entire configuration and write into the supplied transaction, again using the Maapi interface.
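-
-As a rough illustration, the sketch below shows how a generic NED might walk the `NedEditOp` array in its `prepare()` callback. This is a hypothetical sketch: the accessor names (`getOp`, `getPath`, `getValue`) and the `DeviceSession` helper are assumptions for illustration, not the exact `com.tailf.ned` API; consult the Javadoc for the real signatures.
-
-```java
-// Hypothetical sketch of a generic NED prepare() callback. The NedEditOp
-// accessor names and the DeviceSession helper are assumed; consult the
-// com.tailf.ned Javadoc for the actual API.
-public class MyGenericNed extends NedGenericBase {
-    private DeviceSession device;  // hypothetical proprietary-protocol client
-
-    public void prepare(NedWorker worker, NedEditOp[] ops)
-        throws NedException, IOException {
-        for (NedEditOp op : ops) {
-            switch (op.getOp()) {
-            case NedEditOp.CREATED:    // a node was created at op.getPath()
-                device.create(op.getPath());
-                break;
-            case NedEditOp.DELETED:    // the node at op.getPath() was deleted
-                device.delete(op.getPath());
-                break;
-            case NedEditOp.VALUE_SET:  // a leaf was set to op.getValue()
-                device.setValue(op.getPath(), op.getValue());
-                break;
-            }
-        }
-        worker.prepareResponse();      // report success back to NSO
-    }
-}
-```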
-
-Once the generic NED is implemented, all other functions in NSO work precisely in the same manner as with NETCONF and CLI NED devices. NSO still has the capability to run network-wide transactions. The caveat is that to abort a transaction towards a device that doesn't support transactions, we calculate the reverse diff and send it to the device, i.e. we automatically calculate the undo operations.
-
-Another complication with generic NEDs is how the NED class shall authenticate towards the managed device. This depends entirely on the protocol between the NED class and the managed device. If SSH is used to a proprietary CLI, the existing authgroup structure in NSO can be used as is. However, if some other authentication data is needed, it is up to the generic NED implementer to augment the authgroups in `tailf-ncs.yang` accordingly, as sketched below.
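-
-A minimal sketch of such an augmentation, assuming an extra API token is needed per user mapping. The module name, prefix, and the `api-token` leaf are invented for illustration; only the augment target path follows `tailf-ncs.yang`.
-
-```yang
-// Hypothetical sketch: adding an extra credential for a generic NED to the
-// per-user mappings (umap) of an authgroup.
-module example-ned-auth {
-  namespace "http://example.com/ned-auth";
-  prefix exauth;
-
-  import tailf-ncs {
-    prefix ncs;
-  }
-
-  augment "/ncs:devices/ncs:authgroups/ncs:group/ncs:umap" {
-    leaf api-token {
-      type string;
-    }
-  }
-}
-```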
-
-We must also configure a managed device, indicating that its configuration is handled by a specific generic NED. Below we see that the NED with identity `xmlrpc` is handling this device.
-
-```cli
-admin@ncs# show running-config devices device x1
-
-address 127.0.0.1
-port 12023
-authgroup default
-device-type generic ned-id xmlrpc
-state admin-state unlocked
-...
-```
-
-The [examples.ncs/device-management/generic-xmlrpc-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/generic-xmlrpc-ned) example in the NSO examples collection implements a generic NED that speaks XML-RPC to 3 HTTP servers. The HTTP servers run the Apache XML-RPC server code, and the NED code manipulates the 3 HTTP servers using a number of predefined XML-RPC calls.
-
-A good starting point when we wish to implement a new generic NED is the `ncs-make-package --generic-ned-skeleton ...` command, which is used to generate a skeleton package for a generic NED.
-
-```bash
-$ ncs-make-package --generic-ned-skeleton abc --build
-```
-
-```bash
-$ ncs-setup --ned-package abc --dest ncs
-```
-
-```bash
-$ cd ncs
-```
-
-```bash
-$ ncs -c ncs.conf
-```
-
-```bash
-$ ncs_cli -C -u admin
-```
-
-```cli
-admin@ncs# show packages package abc
-packages package abc
-package-version 1.0
-description "Skeleton for a generic NED"
-ncs-min-version [ 3.3 ]
-component MyDevice
- callback java-class-name [ com.example.abc.abcNed ]
- ned generic ned-id abc
- ned device vendor "Acme abc"
- ...
- oper-status up
-```
-
-## Getting Started with a Generic NED
-
-A generic NED always requires more work than a CLI NED. The generic NED needs to know how to map arrays of `NedEditOp` objects into the equivalent reconfiguration operations on the device. Depending on the protocol and configuration capabilities of the device, this may be arbitrarily difficult.
-
-Regardless of the device, we must always write a YANG model that describes the device. The array of `NedEditOp` objects that the generic NED code gets exposed to is relative to the YANG model that we have written for the device. Again, this model doesn't necessarily have to cover all aspects of the device.
-
-Often a useful technique with generic NEDs can be to write a pyang plugin to generate code for the generic NED. Again, depending on the device it may be possible to generate Java code from a pyang plugin that covers most or all aspects of mapping an array of `NedEditOp` objects into the equivalent reconfiguration commands for the device.
-
-Pyang is an extensible and open-source YANG parser (written by Tail-f) available at `http://www.yang-central.org`. It is also part of the NSO release. A number of plugins are shipped in the NSO release; for example, `$NCS_DIR/lib/pyang/pyang/plugins/tree.py` is a good plugin to start with if we wish to write our own plugin.
-
-The [examples.ncs/device-management/generic-xmlrpc-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/generic-xmlrpc-ned) example is a good example to start with if we wish to write a generic NED. It manages a set of devices over the XML-RPC protocol. In this example, we have:
-
-* Defined a fictitious YANG model for the device.
-* Implemented an XML-RPC server exporting a set of RPCs to manipulate that fictitious data model. The XML-RPC server runs the Apache `org.apache.xmlrpc.server.XmlRpcServer` Java package.
-* Implemented a Generic NED which acts as an XML-RPC client speaking HTTP to the XML-RPC servers.
-
-The example is self-contained, and we can, using the NED code, manipulate these XML-RPC servers in a manner similar to all other managed devices.
-
-```bash
-$ cd $NCS_DIR/device-management/xmlrpc-device
-```
-
-```bash
-$ make all start
-```
-
-```bash
-$ ncs_cli -C -u admin
-```
-
-```cli
-admin@ncs# devices sync-from
-sync-result {
- device r1
- result true
-}
-sync-result {
- device r2
- result true
-}
-sync-result {
- device r3
- result true
-}
-```
-
-```cli
-admin@ncs# show running-config devices r1 config
-
-ios:interface eth0
- macaddr 84:2b:2b:9e:af:0a
- ipv4-address 192.168.1.129
- ipv4-mask 255.255.255.0
- status Up
- mtu 1500
- alias 0
- ipv4-address 192.168.1.130
- ipv4-mask 255.255.255.0
- !
- alias 1
- ipv4-address 192.168.1.131
- ipv4-mask 255.255.255.0
- !
-speed 100
-txqueuelen 1000
-!
-```
-
-### Tweaking the Order of `NedEditOp` Objects
-
-As mentioned earlier, the `NedEditOp` objects are relative to the YANG model of the device and are to be translated into the equivalent reconfiguration operations on the device. Applying reconfiguration operations may only be valid in a certain order.
-
-For Generic NEDs, NSO provides a feature to ensure dependency rules are being obeyed when generating a diff to commit. It controls the order of operations delivered in the `NedEditOp` array. The feature is activated by adding the following option to `package-meta-data.xml`:
-
-```xml
-<option>
-  <name>ordered-diff</name>
-</option>
-```
-
-When the `ordered-diff` flag is set, the `NedEditOp` objects follow YANG schema order and consider dependencies between leaf nodes. Dependencies can be defined using leafrefs and the _`tailf:cli-diff-after`_, _`tailf:cli-diff-create-after`_, _`tailf:cli-diff-modify-after`_, _`tailf:cli-diff-set-after`_, _`tailf:cli-diff-delete-after`_ YANG extensions. Read more about the above YANG extensions in the Tail-f CLI YANG extensions man page.
-
-## NED Commands
-
-A device we wish to manage using a NED usually has not only configuration data that we wish to manipulate from NSO; it typically also has a set of commands that do not relate to configuration.
-
-The commands on the device that we wish to invoke from NSO must be modeled as actions and compiled using a special `ncsc` command for NED data models that do not directly relate to configuration data on the device.
-
-The [examples.ncs/device-management/generic-xmlrpc-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/generic-xmlrpc-ned) example managed device, a fictitious XML-RPC device, contains a YANG snippet:
-
-```yang
-container commands {
- tailf:action idle-timeout {
- tailf:actionpoint ncsinternal {
- tailf:internal;
- }
- input {
- leaf time {
- type int32;
- }
- }
- output {
- leaf result {
- type string;
- }
- }
- }
-}
-```
-
-When that action YANG is imported into NSO, it ends up under the managed device. We can invoke the action _on_ the device as:
-
-```cli
-admin@ncs# devices device r1 config ios:commands idle-timeout time 55
-```
-
-```
-result OK
-```
-
-The NED code is obviously involved here. All NEDs must always implement:
-
-```
-void command(NedWorker w, String cmdName, ConfXMLParam[] params)
- throws NedException, IOException;
-```
-
-The `command()` method gets invoked in the NED, the code must then execute the command. The input parameters in the `params` parameter correspond to the data provided in the action. The `command()` method must reply with another array of `ConfXMLParam` objects.
-
-```java
-public void command(NedWorker worker, String cmdname, ConfXMLParam[] p)
- throws NedException, IOException {
- session.setTracer(worker);
- if (cmdname.compareTo("idle-timeout") == 0) {
- worker.commandResponse(new ConfXMLParam[]{
- new ConfXMLParamValue(new interfaces(),
- "result",
- new ConfBuf("OK"))
- });
- }
-}
-```
-
-The above code is fake; on a real device, the job of the `command()` method is to establish a connection to the device, invoke the command, parse the output, and finally reply with a `ConfXMLParam` array.
-
-The purpose of implementing NED commands is usually that we want to expose device commands to the programmatic APIs in the NSO DOM tree.
diff --git a/development/advanced-development/developing-neds/ned-upgrades-and-migration.md b/development/advanced-development/developing-neds/ned-upgrades-and-migration.md
deleted file mode 100644
index 2eccc65a..00000000
--- a/development/advanced-development/developing-neds/ned-upgrades-and-migration.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-description: Perform NED version upgrades and migration.
----
-
-# NED Upgrades and Migration
-
-Many services in NSO rely on NEDs to perform network provisioning. These services map service-specific configuration to the device data models, provided by the NEDs. As the NED packages can be upgraded independently, they can introduce changes in the device YANG models that cause issues for the services using them.
-
-NSO provides tools to migrate between backward incompatible NED versions. The tools are designed to give you a structured analysis of which paths will change between two NED versions and visibility into the scope of the potential impact that a change in the NED will drive in the service code.
-
-The tools allow for a usage-based analysis of which parts of the NED data model (and instance tree) a particular service has written to. This will give you an (at least opportunistic) sense of which paths must change in the service code.
-
-These features aim to lower the barrier of upgrading NEDs and significantly reduce the amount of uncertainty and side effects that NED upgrades were historically associated with.
-
-## The `migrate` Action
-
-By using the `/ncs:devices/device/migrate` action, you can change the NED major/minor version of a device. The action migrates all configuration and service meta-data. The action can also be executed in parallel on a device group or on all devices matching a NED identity. The procedure for migrating devices is further described in [NED Migration](../../../administration/management/ned-administration.md#sec.ned\_migration).
-
-Additionally, the example [examples.ncs/device-management/ned-migration](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/ned-migration) in the NSO examples collection illustrates how to migrate devices between different NED versions using the `migrate` action.
-
-What makes it particularly useful to a service developer is that the action reports what paths have been modified and the service instances affected by those changes. This information can then be used to prepare the service code to handle the new NED version. If the `verbose` option is used, all service instances are reported instead of just the service points. If the `dry-run` option is used, the action simply reports what it would do. This gives you the chance to analyze before any actual change is performed.
diff --git a/development/advanced-development/developing-neds/netconf-ned-development.md b/development/advanced-development/developing-neds/netconf-ned-development.md
deleted file mode 100644
index 439cb2ec..00000000
--- a/development/advanced-development/developing-neds/netconf-ned-development.md
+++ /dev/null
@@ -1,653 +0,0 @@
----
-description: Create NETCONF NEDs.
----
-
-# NETCONF NED Development
-
-Creating and installing a NETCONF NED consists of the following steps:
-
-* Make the device YANG data models available to NSO
-* Build the NED package from the YANG data models using NSO tools
-* Install the NED with NSO
-* Configure the device connection and notification events in NSO
-
-Creating a NETCONF NED that uses the built-in NSO NETCONF client can be a pleasant experience with devices and nodes that strictly follow the specification for the NETCONF protocol and YANG mappings to NETCONF. If the device does not, the smooth sailing will quickly come to a halt, and you are recommended to visit the [NED Administration](../../../administration/management/ned-administration.md) in Administration and get help from the Cisco NSO NED team, who can diagnose, develop, and maintain NEDs that bypass misbehaving devices' special quirks.
-
-## Tools for NETCONF NED Development
-
-Before NSO can manage a NETCONF-capable device, a corresponding NETCONF NED needs to be loaded. While no code needs to be written for such a NED, it must contain YANG data models for this kind of device. While in some cases the YANG models may be provided by the device's vendor, devices that implement the RFC 6022 YANG Module for NETCONF Monitoring can provide their YANG models using the functionality described in this RFC.
-
-The NSO example under [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/netconf-ned) implements two shell scripts that use different tools to build a NETCONF NED from a simulated hardware chassis system controller device.
-
-### **The `netconf-console` and `ncs-make-package` Tools**
-
-The `netconf-console` NETCONF client tool is a Python script that can be used for testing, debugging, and simple client duties. For example, making the device YANG models available to NSO using the NETCONF IETF RFC 6022 `get-schema` operation to download YANG modules, and the RFC 6241 `get` operation where the device implements the RFC 7895 YANG module library to provide information about all the YANG modules used by the NETCONF server. Type `netconf-console -h` for documentation.
-
-Once the required YANG models are downloaded or copied from the device, the `ncs-make-package` bash script tool can be used to create and build, for example, the NETCONF NED package. See [ncs-make-package(1)](../../../resources/man/ncs-make-package.1.md) in Manual Pages and `ncs-make-package -h` for documentation.
-
-The `demo.sh` script in the `netconf-ned` example uses the `netconf-console` and `ncs-make-package` combination to create, build, and install the NETCONF NED. When you know beforehand which models you need from the device, you often begin with this approach when encountering a new NETCONF device.
-
-### **The NETCONF NED Builder Tool**
-
-The NETCONF NED builder uses the functionality of the two previous tools to assist the NSO developer onboard NETCONF devices by fetching the YANG models from a device and building a NETCONF NED using CLI commands as a frontend.
-
-The `demo_nb.sh` script in the `netconf-ned` example uses the NSO CLI NETCONF NED builder commands to create, build, and install the NETCONF NED. This tool can be beneficial for a device where additional YANG models are required to cover the dependencies of the must-have models. Also, devices known to have behaved well with previous versions can benefit from using this tool and its selection profile and production packaging features.
-
-## Using the **`netconf-console`** and **`ncs-make-package`** Combination
-
-For a demo of the steps below, see the README in the [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/netconf-ned) example and run the demo.sh script.
-
-### **Make the Device YANG Data Models Available to NSO**
-
-List the YANG version 1.0 models the device supports using NETCONF `hello` message.
-
-```bash
-$ netconf-console --port $DEVICE_NETCONF_PORT --hello | grep "module="
-http://tail-f.com/ns/aaa/1.1?module=tailf-aaa&revision=2023-04-13
-http://tail-f.com/ns/common/query?module=tailf-common-query&revision=2017-12-15
-http://tail-f.com/ns/confd-progress?module=tailf-confd-progress&revision=2020-06-29
-...
-urn:ietf:params:xml:ns:yang:ietf-yang-metadata?module=ietf-yang-metadata&revision=2016-08-05
-urn:ietf:params:xml:ns:yang:ietf-yang-types?module=ietf-yang-types&revision=2013-07-15
-```
-
-List the YANG version 1.1 models supported by the device from the device yang-library.
-
-```bash
-$ netconf-console --port=$DEVICE_NETCONF_PORT --get -x /yang-library/module-set/module/name
-<?xml version="1.0" encoding="UTF-8"?>
-<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
-  <data>
-    <yang-library xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-library">
-      <module-set>
-        <name>common</name>
-        <module>
-          <name>iana-crypt-hash</name>
-        </module>
-        <module>
-          <name>ietf-hardware</name>
-        </module>
-        <module>
-          <name>ietf-netconf</name>
-        </module>
-        <module>
-          <name>ietf-netconf-acm</name>
-        </module>
-        ...
-        <module>
-          <name>tailf-yang-patch</name>
-        </module>
-        <module>
-          <name>timestamp-hardware</name>
-        </module>
-      </module-set>
-    </yang-library>
-  </data>
-</rpc-reply>
-```
-
-The `ietf-hardware.yang` model is of interest to manage the device hardware. Use the `netconf-console` NETCONF `get-schema` operation to get the `ietf-hardware.yang` model.
-
-```bash
-$ netconf-console --port=$DEVICE_NETCONF_PORT \
- --get-schema=ietf-hardware > dev-yang/ietf-hardware.yang
-```
-
-The `ietf-hardware.yang` module imports a few other YANG models, which also need to be downloaded:
-
-```bash
-$ cat dev-yang/ietf-hardware.yang | grep import
-  import ietf-inet-types {
-  import ietf-yang-types {
-  import iana-hardware {
-$ netconf-console --port=$DEVICE_NETCONF_PORT \
-    --get-schema=iana-hardware > dev-yang/iana-hardware.yang
-```
-
-The `timestamp-hardware.yang` module augments a node onto the `ietf-hardware.yang` model. This is not visible in the YANG library. Therefore, information on the augment dependency must be available, or all YANG models must be downloaded and checked for imports and augments of the `ietf-hardware.yang` model to make use of the augmented node(s).
-
-```bash
-$ netconf-console --port=$DEVICE_NETCONF_PORT --get-schema=timestamp-hardware > \
- dev-yang/timestamp-hardware.yang
-```
-
-### **Build the NED from the YANG Data Models**
-
-Create and build the NETCONF NED package from the device YANG models using the `ncs-make-package` script.
-
-```bash
-$ ncs-make-package --netconf-ned dev-yang --dest nso-rundir/packages/devsim --build \
- --verbose --no-test --no-java --no-netsim --no-python --no-template --vendor "Tail-f" \
- --package-version "1.0" devsim
-```
-
-If you make any changes to, for example, the YANG models after creating the package above, you can rebuild the package using `make -C nso-rundir/packages/devsim all`.
-
-### **Configure the Device Connection**
-
-Start NSO. NSO will load the new package. If the package was loaded previously, use the `--with-package-reload` option. See [ncs(1)](../../../resources/man/ncs.1.md) in Manual Pages for details. If NSO is already running, use the `packages reload` CLI command.
-
-```bash
-$ ncs --cd ./nso-rundir
-```
-
-As communication with the devices being managed by NSO requires authentication, a custom authentication group will likely need to be created with mapping between the NSO user and the remote device username and password, SSH public-key authentication, or external authentication. The example used here has a 1-1 mapping between the NSO admin user and the ConfD-enabled simulated device admin user for both username and password.
-
-In the example below, the device name is set to `hw0`, and as the device here runs on the same host as NSO, the NETCONF interface IP address is 127.0.0.1 while the port is set to 12022 to not collide with the NSO northbound NETCONF port. The standard NETCONF port, 830, is used for production.
-
-The `default` authentication group is used.
-
-```bash
-$ ncs_cli -u admin -C
-# config
-Entering configuration mode terminal
-(config)# devices device hw0 address 127.0.0.1 port 12022 authgroup default
-(config-device-hw0)# devices device hw0 trace pretty
-(config-device-hw0)# state admin-state unlocked
-(config-device-hw0)# device-type netconf ned-id devsim-nc-1.0
-(config-device-hw0)# commit
-Commit complete.
-```
-
-Fetch the public SSH host key from the device and sync the configuration covered by the `ietf-hardware.yang` from the device.
-
-```bash
-$ ncs_cli -u admin -C
-# devices fetch-ssh-host-keys
-fetch-result {
- device hw0
- result updated
- fingerprint {
- algorithm ssh-ed25519
- value 00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff
- }
-}
-# devices device hw0 sync-from
-result true
-```
-
-NSO can now configure the device, state data can be read, actions can be executed, and notifications can be received. See the [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/netconf-ned) `demo.sh` example script for a demo.
-
-## Using the NETCONF NED Builder Tool
-
-For a demo of the steps below, see README in the [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/netconf-ned) example and run the `demo_nb.sh` script.
-
-### **Configure the Device Connection**
-
-As communication with the devices being managed by NSO requires authentication, a custom authentication group will likely need to be created with mapping between the NSO user and the remote device username and password, SSH public-key authentication, or external authentication.
-
-The example used here has a 1-1 mapping between the NSO admin user and the ConfD-enabled simulated device admin user for both username and password.
-
-```cli
-admin@ncs# show running-config devices authgroups group
-devices authgroups group default
- umap admin
- remote-name admin
- remote-password $9$xrr1xtyI/8l9xm9GxPqwzcEbQ6oaK7k5RHm96Hkgysg=
- !
- umap oper
- remote-name oper
- remote-password $9$Pr2BRIHRSWOW2v85PvRGvU7DNehWL1hcP3t1+cIgaoE=
- !
-!
-```
-
-In the example below, the device name is set to `hw0`, and as the device here runs on the same host as NSO, the NETCONF interface IP address is 127.0.0.1 while the port is set to 12022 to not collide with the NSO northbound NETCONF port. The standard NETCONF port, 830, is used for production.
-
-The `default` authentication group, as shown above, is used.
-
-```bash
-# config
-Entering configuration mode terminal
-(config)# devices device hw0 address 127.0.0.1 port 12022 authgroup default
-(config-device-hw0)# devices device hw0 trace pretty
-(config-device-hw0)# state admin-state unlocked
-(config-device-hw0)# device-type netconf ned-id netconf
-(config-device-hw0)# commit
-```
-
-{% hint style="info" %}
-A temporary NED identity is configured to `netconf` as the NED package has not yet been built. It will be changed to match the NETCONF NED package NED ID once the package is installed. The generic `netconf` ned-id allows NSO to connect to the device for basic NETCONF operations, such as `get` and `get-schema` for listing and downloading YANG models from the device.
-{% endhint %}
-
-### **Make the Device YANG Data Models Available to NSO**
-
-Create a NETCONF NED Builder project called `hardware` for the device, here named `hw0`.
-
-```bash
-# devtools true
-# config
-(config)# netconf-ned-builder project hardware 1.0 device hw0 local-user admin vendor Tail-f
-(config)# commit
-(config)# end
-# show netconf-ned-builder project hardware
-netconf-ned-builder project hardware 1.0
- download-cache-path /path/to/nso/examples.ncs/device-management/netconf-ned/nso-rundir/
- state/netconf-ned-builder/cache/hardware-nc-1.0
- ned-directory-path /path/to/nso/examples.ncs/device-management/netconf-ned/nso-rundir/
- state/netconf-ned-builder/hardware-nc-1.0
-```
-
-The NETCONF NED Builder is a developer tool that must be enabled first through the `devtools true` command. The NETCONF NED Builder feature is not expected to be used by the end users of NSO.
-
-The cache directory above is where additional YANG and YANG annotation files can be added in addition to the ones downloaded from the device. Files added need to be configured with the NED builder to be included with the project, as described below.
-
-The project argument for the `netconf-ned-builder` command requires both the project name and a version number for the NED being built. A version number often picked is the version number of the device software version to match the NED to the device software it is tested with. NSO uses the project name and version number to create the NED name, here `hardware-nc-1.0`. The device's name is linked to the device name configured for the device connection.
-
-#### Copying Manually to the Cache Directory
-
-{% hint style="info" %}
-This step is not required if the device supports the NETCONF `get-schema` operation and all YANG modules can be retrieved from the device. Otherwise, you copy the YANG models to the `state/netconf-ned-builder/cache/hardware-nc-1.0` directory for use with the device.
-{% endhint %}
-
-After downloading the YANG data models and before building the NED with the NED builder, you need to register the YANG module with the NSO NED builder. For example, if you want to include a `dummy.yang` module with the NED, you first copy it to the cache directory and then, for example, create an XML file for use with the `ncs_load` command to update the NSO CDB operational datastore:
-
-```bash
-$ cp dummy.yang $NCS_DIR/examples.ncs/device-management/netconf-ned/\
- nso-rundir/state/netconf-ned-builder/cache/hardware-nc-1.0/
-$ cat dummy.xml
-<config xmlns="http://tail-f.com/ns/config/1.0">
-  <netconf-ned-builder xmlns="http://tail-f.com/ns/ncs-netconf-ned-builder">
-    <project>
-      <family-name>hardware</family-name>
-      <major-version>1.0</major-version>
-      <module>
-        <name>dummy</name>
-        <revision>2023-11-10</revision>
-        <location>NETCONF</location>
-        <status>selected downloaded</status>
-      </module>
-    </project>
-  </netconf-ned-builder>
-</config>
-$ ncs_load -O -m -l dummy.xml
-$ ncs_cli -u admin -C
-# devtools true
-# show netconf-ned-builder project hardware 1.0 module dummy 2023-11-10
-                                                     SELECT               BUILD  BUILD
-NAME   REVISION    NAMESPACE  FEATURE  LOCATION     STATUS               ERROR  WARNING
------------------------------------------------------------------------------------------
-dummy  2023-11-10  -          -        [ NETCONF ]  selected,downloaded
-```
-
-#### Adding YANG Annotation Files
-
-In some situations, you want to annotate the YANG data models that were downloaded from the device. For example, when an encrypted string is stored on the device, the encrypted value that is stored on the device will differ from the value stored in NSO if the two initialization vectors differ.
-
-Say you have a YANG data model:
-
-```yang
-module dummy {
-  namespace "urn:dummy";
-  prefix dummy;
-
-  import tailf-common {
-    prefix tailf;
-  }
-
-  revision 2023-11-10 {
-    description
-      "Initial revision.";
-  }
-
-  grouping my-grouping {
-    container my-container {
-      leaf my-encrypted-password {
-        type tailf:aes-cfb-128-encrypted-string;
-      }
-    }
-  }
-}
-```
-
-And create a YANG annotation module:
-
-```yang
-module dummy-ann {
-  namespace "urn:dummy-ann";
-  prefix dummy-ann;
-
-  import tailf-common {
-    prefix tailf;
-  }
-
-  tailf:annotate-module "dummy" {
-    tailf:annotate-statement "grouping[name='my-grouping']" {
-      tailf:annotate-statement "container[name='my-container']" {
-        tailf:annotate-statement "leaf[name='my-encrypted-password']" {
-          tailf:ned-ignore-compare-config;
-        }
-      }
-    }
-  }
-}
-```
-
-After downloading the YANG data models and before building the NED with the NED builder, you need to register the `dummy-ann.yang` annotation module, as was done above with the XML file for the `dummy.yang` module.
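-
-As a sketch (mirroring the `dummy.yang` registration above), copying the annotation module to the cache and registering it could look like this, with a `dummy-ann.xml` file listing the module name `dummy-ann` and its revision in the same XML structure as `dummy.xml`:
-
-```bash
-$ cp dummy-ann.yang $NCS_DIR/examples.ncs/device-management/netconf-ned/\
-  nso-rundir/state/netconf-ned-builder/cache/hardware-nc-1.0/
-$ ncs_load -O -m -l dummy-ann.xml
-```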
-
-#### Using NETCONF `get-schema` with the NED Builder
-
-If the device supports `get-schema` requests, the device can be contacted directly to download the YANG data models. The hardware system example returns the below YANG source files when the NETCONF `get-schema` operation is issued to the device from NSO. Only a subset of the list is shown.
-
-```bash
-$ ncs_cli -u admin -C
-# devtools true
-# devices fetch-ssh-host-keys
-fetch-result {
- device hw0
- result updated
- fingerprint {
- algorithm ssh-ed25519
- value 00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff
- }
-}
-# netconf-ned-builder project hardware 1.0 fetch-module-list
-# show netconf-ned-builder project hardware 1.0 module
-module iana-crypt-hash 2014-08-06
- namespace urn:ietf:params:xml:ns:yang:iana-crypt-hash
- feature [ crypt-hash-md5 crypt-hash-sha-256 crypt-hash-sha-512 ]
- location [ NETCONF ]
-module iana-hardware 2018-03-13
- namespace urn:ietf:params:xml:ns:yang:iana-hardware
- location [ NETCONF ]
-module ietf-datastores 2018-02-14
- namespace urn:ietf:params:xml:ns:yang:ietf-datastores
- location [ NETCONF ]
-module ietf-hardware 2018-03-13
- namespace urn:ietf:params:xml:ns:yang:ietf-hardware
- location [ NETCONF ]
-module ietf-inet-types 2013-07-15
- namespace urn:ietf:params:xml:ns:yang:ietf-inet-types
- location [ NETCONF ]
-module ietf-interfaces 2018-02-20
- namespace urn:ietf:params:xml:ns:yang:ietf-interfaces
- feature [ arbitrary-names if-mib pre-provisioning ]
- location [ NETCONF ]
-module ietf-ip 2018-02-22
- namespace urn:ietf:params:xml:ns:yang:ietf-ip
- feature [ ipv4-non-contiguous-netmasks ipv6-privacy-autoconf ]
- location [ NETCONF ]
-module ietf-netconf 2011-06-01
- namespace urn:ietf:params:xml:ns:netconf:base:1.0
- feature [ candidate confirmed-commit rollback-on-error validate xpath ]
- location [ NETCONF ]
-module ietf-netconf-acm 2018-02-14
- namespace urn:ietf:params:xml:ns:yang:ietf-netconf-acm
- location [ NETCONF ]
-module ietf-netconf-monitoring 2010-10-04
- namespace urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring
- location [ NETCONF ]
-...
-module ietf-yang-types 2013-07-15
- namespace urn:ietf:params:xml:ns:yang:ietf-yang-types
- location [ NETCONF ]
-module tailf-aaa 2023-04-13
- namespace http://tail-f.com/ns/aaa/1.1
- location [ NETCONF ]
-module tailf-acm 2013-03-07
- namespace http://tail-f.com/yang/acm
- location [ NETCONF ]
-module tailf-common 2023-10-16
- namespace http://tail-f.com/yang/common
- location [ NETCONF ]
-...
-module timestamp-hardware 2023-11-10
- namespace urn:example:timestamp-hardware
- location [ NETCONF ]
-```
-
-The `fetch-ssh-host-keys` command fetches the public SSH host keys from the device to set up NETCONF over SSH. The `fetch-module-list` command will look for existing YANG modules in the download-cache-path folder, YANG version 1.0 models in the device NETCONF `hello` message, and issue a `get` operation to look for YANG version 1.1 models in the device `yang-library`. The `get-schema` operation fetches the YANG modules over NETCONF and puts them in the download-cache-path folder.
-
-After the list of YANG modules is fetched, the retrieved list of modules can be shown. Select the ones you want to download and include in the NETCONF NED.
-
-When you select a module with dependencies on other modules, the modules it depends on are automatically selected, such as those listed below for the `ietf-hardware` module: `iana-hardware`, `ietf-inet-types`, and `ietf-yang-types`. To select all available modules, use the wildcard for both fields. Use the `deselect` command to exclude previously included modules from the build.
-
-```bash
-$ ncs_cli -u admin -C
-# devtools true
-# netconf-ned-builder project hardware 1.0 module ietf-hardware 2018-03-13 select
-# netconf-ned-builder project hardware 1.0 module timestamp-hardware 2023-11-10 select
-# show netconf-ned-builder project hardware 1.0 module status
-NAME REVISION STATUS
------------------------------------------------------
-iana-hardware 2018-03-13 selected,downloaded
-ietf-hardware 2018-03-13 selected,downloaded
-ietf-inet-types 2013-07-15 selected,pending
-ietf-yang-types 2013-07-15 selected,pending
-timestamp-hardware 2023-11-10 selected,pending
-```
-
-Wait for NSO to download the selected YANG models (see the `demo_nb.sh` script for details):
-
-```bash
-NAME REVISION STATUS
------------------------------------------------------
-iana-hardware 2018-03-13 selected,downloaded
-ietf-hardware 2018-03-13 selected,downloaded
-ietf-inet-types 2013-07-15 selected,downloaded
-ietf-yang-types 2013-07-15 selected,downloaded
-timestamp-hardware 2023-11-10 selected,downloaded
-```
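-
-After the download, the YANG files are stored in the download-cache-path folder shown earlier. Assuming the conventional `module@revision.yang` file naming, the cache could then contain:
-
-```bash
-$ ls nso-rundir/state/netconf-ned-builder/cache/hardware-nc-1.0/
-iana-hardware@2018-03-13.yang
-ietf-hardware@2018-03-13.yang
-ietf-inet-types@2013-07-15.yang
-ietf-yang-types@2013-07-15.yang
-timestamp-hardware@2023-11-10.yang
-```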
-
-#### Principles of Selecting the YANG Modules
-
-Before diving into more details: selecting which modules to include is a crucial step in building the NED and deserves to be highlighted.
-
-The best practice recommendation is to select only the modules necessary to perform the tasks for the given NSO deployment. This reduces memory consumption and improves wall-clock performance for operations such as `sync-from` and upgrades.
-
-For example, suppose the aim of the NSO installation is exclusively to manage BGP on the device, and the necessary configuration is defined in a separate module. In that case, only this module and its dependencies need to be selected. If several services are running within the NSO deployment, it will be necessary to include more data models in the single NED that may serve one or many devices. However, if the NSO installation is used to, for example, take a full backup of the device's configuration, all device modules need to be included with the NED.
-
-Selecting a module will also require selecting the module's dependencies, namely, modules imported by the selected modules, modules that augment the selected modules with the required functionality, and modules known to deviate from the selected module in the device's implementation.
-
-Avoid selecting YANG modules that overlap where, for example, configuring one leaf will update another. Including both will cause NSO to get out of sync with the device after a NETCONF `edit-config` operation, forcing time-consuming sync operations.
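-
-For example, a module that was selected but turns out not to be needed can be excluded again with the `deselect` command (module name and revision taken from the earlier module listing):
-
-```bash
-# devtools true
-# netconf-ned-builder project hardware 1.0 module tailf-aaa 2023-04-13 deselect
-```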
-
-### **Build the NED from the YANG Data Models**
-
-An NSO NED is a package containing the device YANG data models. The NED package must first be built, then installed with NSO, and finally, the package must be loaded for NSO to communicate with the device via NETCONF using the device YANG data models as the schema for what to configure, state to read, etc.
-
-After the files have been downloaded from the device, they must be built before being used. The following example shows how to build a NED for the `hw0` device.
-
-```
-# devtools true
-# netconf-ned-builder project hardware 1.0 build-ned
-# show netconf-ned-builder project hardware 1.0 build-status
-build-status success
-# show netconf-ned-builder project hardware 1.0 module build-warning
-% No entries found.
-# show netconf-ned-builder project hardware 1.0 module build-error
-% No entries found.
-# unhide debug
-# show netconf-ned-builder project hardware 1.0 compiler-output
-% No entries found.
-```
-
-{% hint style="info" %}
-Build errors can be found in the `build-error` leaf under the module list entry. If there are errors in the build, resolve the issues in the YANG models, update them and their revision on the device and download them from the device again, or place the fixed YANG models in the cache as described earlier.
-{% endhint %}
-
-Warnings after building the NED can be found in the `build-warning` leaf under the module list entry. It is good practice to clean up build warnings in your YANG models.
-
-A build error example:
-
-```bash
-# netconf-ned-builder project cisco-iosxr 6.6 build-ned
-Error: Failed to compile NED bundle
-# show netconf-ned-builder project cisco-iosxr 6.6 build-status
-build-status error
-# show netconf-ned-builder project cisco-iosxr 6.6 module build-error
-module openconfig-telemetry 2016-02-04
- build-error at line 700:
-```
-
-The full compiler output for debugging purposes can be found in the `compiler-output` leaf under the project list entry. The `compiler-output` leaf is hidden by `hide-group debug` and may be accessed in the CLI using the `unhide debug` command if the `hide-group` is configured in `ncs.conf`. Example `ncs.conf` config:
-
-```xml
-<hide-group>
-  <name>debug</name>
-</hide-group>
-```
-
-For the `ncs.conf` configuration change to take effect, the configuration must either be reloaded or NSO restarted. A reload using the `ncs_cmd` tool:
-
-```bash
-$ ncs_cmd -c reload
-```
-
-As the compilation will halt if an error is found in a YANG data model, it can be helpful to first check all YANG data models at once using a shell script plus the NSO yanger tool.
-
-```bash
-$ ls -1
-check.sh
-yang # directory with my YANG modules
-$ cat check.sh
-#!/bin/sh
-for f in yang/*.yang
-do
- $NCS_DIR/bin/yanger -p yang $f
-done
-```
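-
-A sketch of how the script could be used on the modules in the NED builder cache (the diagnostic line is hypothetical; yanger prints one diagnostic per finding):
-
-```bash
-$ cp nso-rundir/state/netconf-ned-builder/cache/hardware-nc-1.0/*.yang yang/
-$ sh check.sh
-yang/dummy.yang:12: error: ...
-```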
-
-As an alternative to debugging NED build issues inside an NSO CLI session, the `make-development-ned` action creates a development version of the NED, which can be used to debug and fix issues in the YANG modules.
-
-```bash
-$ ncs_cli -u admin -C
-# devtools true
-# config
-(config)# netconf-ned-builder project hardware 1.0 make-development-ned in-directory /tmp
-ned-path /tmp/hardware-nc-1.0
-(config)# end
-# exit
-$ cd /tmp/hardware-nc-1.0/src
-$ make clean all
-```
-
-YANG data models that do not compile due to YANG RFC compliance issues can either be updated directly in the cache folder or be updated on the device and downloaded again through the `get-schema` operation, by removing them from the cache folder and repeating the previous process to rebuild the NED. YANG modules that are not needed for your use case can instead be deselected from the build.
-
-{% hint style="info" %}
-Having device vendors update their YANG models to comply with the NETCONF and YANG standards can be time-consuming. Visit the [NED Administration](../../../administration/management/ned-administration.md) page and get help from the Cisco NSO NED team, who can diagnose, develop, and maintain NEDs that work around misbehaving devices' special quirks.
-{% endhint %}
-
-### **Export the NED Package and Load**
-
-A successfully built NED may be exported as a `tar` file using the `export-ned` action. The `tar` file name is constructed according to the following naming convention:
-
-```bash
-ncs-<nso-version>-<project-name>-nc-<project-version>.tar.gz
-```
-
-The user chooses the directory where the file is created and must have write access to it, i.e., configure the NSO user with the same uid (`id -u`) as the OS user:
-
-```bash
-$ id -u
-501
-$ ncs_cli -u admin -C
-# devtools true
-# config
-(config)# aaa authentication users user admin uid 501
-(config-user-admin)# commit
-Commit complete.
-(config-user-admin)# end
-# netconf-ned-builder project hardware 1.0 export-ned to-directory \
- /path/to/nso/examples.ncs/device-management/netconf-ned/nso-rundir/packages
-tar-file /path/to/nso/examples.ncs/device-management/netconf-ned/
- nso-rundir/packages/ncs-6.2-hardware-nc-1.0.tar.gz
-```
-
-When the NED package has been copied to the NSO run-time packages directory, the NED package can be loaded by NSO.
-
-```bash
-# packages reload
->>>> System upgrade is starting.
->>>> Sessions in configure mode must exit to operational mode.
->>>> No configuration changes can be performed until upgrade has completed.
->>>> System upgrade has completed successfully.
-reload-result {
- package hardware-nc-1.0
- result true
-}
-# show packages | nomore
-packages package hardware-nc-1.0
- package-version 1.0
- description "Generated by NETCONF NED builder"
- ncs-min-version [ 6.2 ]
- directory ./state/packages-in-use/1/hardware-nc-1.0
- component hardware
- ned netconf ned-id hardware-nc-1.0
- ned device vendor Tail-f
- oper-status up
-```
-
-### **Update the `ned-id` for the `hw0` Device**
-
-When the NETCONF NED has been built for the `hw0` device, the `ned-id` for `hw0` needs to be updated before the NED can be used to manage the device.
-
-```bash
-$ ncs_cli -u admin -C
-# show packages package hardware-nc-1.0 component hardware ned netconf ned-id
-ned netconf ned-id hardware-nc-1.0
-# config
-(config)# devices device hw0 device-type netconf ned-id hardware-nc-1.0
-(config-device-hw0)# commit
-Commit complete.
-(config-device-hw0)# end
-# devices device hw0 sync-from
-result true
-# show running-config devices device hw0 config | nomore
-devices device hw0
- config
- hardware component carbon
- class module
- parent slot-1-4-1
- parent-rel-pos 1040100
- alias dummy
- asset-id dummy
- uri [ urn:dummy ]
- !
- hardware component carbon-port-4
- class port
- parent carbon
- parent-rel-pos 1040104
- alias dummy-port
- asset-id dummy
- uri [ urn:dummy ]
- !
-...
-```
-
-NSO can now configure the device, state data can be read, actions can be executed, and notifications can be received. See the [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/netconf-ned) `demo_nb.sh` example script for a demo.
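-
-As a quick sketch of what a configuration change could look like (component and leaf names taken from the `sync-from` output above; the exact submode prompt may differ):
-
-```bash
-# config
-(config)# devices device hw0 config hardware component carbon alias new-alias
-(config-component-carbon)# commit dry-run outformat native
-(config-component-carbon)# commit
-```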
-
-### **Remove a NED from NSO**
-
-Installed NED packages can be removed from NSO by deleting them from the NSO project's packages folder and then deleting the device and the NETCONF NED project through the NSO CLI. To uninstall a NED built for the device `hw0`:
-
-```bash
-$ ncs_cli -C -u admin
-# devtools true
-# config
-(config)# no netconf-ned-builder project hardware 1.0
-(config)# commit
-Commit complete.
-(config)# end
-# packages reload
-Error: The following modules will be deleted by upgrade:
-hardware-nc-1.0: iana-hardware
-hardware-nc-1.0: ietf-hardware
-hardware-nc-1.0: hardware-nc
-hardware-nc-1.0: hardware-nc-1.0
-If this is intended, proceed with 'force' parameter.
-# packages reload force
-
->>>> System upgrade is starting.
->>>> Sessions in configure mode must exit to operational mode.
->>>> No configuration changes can be performed until upgrade has completed.
->>>> System upgrade has completed successfully.
-```
diff --git a/development/advanced-development/developing-neds/snmp-ned.md b/development/advanced-development/developing-neds/snmp-ned.md
deleted file mode 100644
index 66fde8c1..00000000
--- a/development/advanced-development/developing-neds/snmp-ned.md
+++ /dev/null
@@ -1,331 +0,0 @@
----
-description: Description of SNMP NED.
----
-
-# SNMP NED
-
-NSO can use SNMP to configure a managed device, under certain circumstances. SNMP in general is not suitable for configuration, and it is important to understand why:
-
-* In SNMP, the size of a SET request, which is used to write to a device, is limited to what fits into one UDP packet. This means that a large configuration change must be split into many packets. Each such packet contains some parameters to set, and each such packet is applied on its own by the device. If one SET request out of many fails, there is no abort command to undo the already applied changes, meaning that rollback is very difficult.
-* The data modeling language used in SNMP, SMIv2, does not distinguish between configuration objects and other writable objects. This means that it is not possible to retrieve only the configuration from a device without explicit, exact knowledge of all objects in all MIBs supported by the device.
-* SNMP supports only two basic operations, read and write. There is no protocol support for creating or deleting data. Such operations must be modeled in the MIBs, explicitly.
-* SMIv2 has limited support for semantic constraints in the data model. This means that it is difficult to know if a certain configuration will apply cleanly on a device. If it doesn't, rollback is tricky, as explained above.
-* Because of all of the above, ordering of SET requests becomes very important. If a device refuses to create some object A before another B, an SNMP manager must make sure to create B before creating A. It is also common that objects cannot be modified without first making them disabled or inactive. There is no standard way to do this, so again, different data models do this in different ways.
-
-Despite all this, if a device can be configured over SNMP, NSO can use its built-in multilingual SNMP manager to communicate with the device. However, to solve the problems mentioned above, the MIBs supported by the device need to be carefully annotated with some additional information that instructs NSO on how to write configuration data to the device. This additional information is described in detail below.
-
-## Overview
-
-To add a device, the following steps need to be followed. They are described in more detail in the following sections.
-
-* Collect (a subset of) the MIBs supported by the device.
-* Optionally, annotate the MIBs to instruct NSO on how to talk to the device, for example, to capture ordering dependencies that are not explicitly modeled in the MIB.
-* Compile the MIBs and load them into NSO.
-* Configure NSO with the address and authentication parameter for the SNMP devices.
-* Optionally configure a named MIB group in NSO with the MIBs supported by the device, and configure the managed device in NSO to use this MIB group. If this step is not done, NSO assumes the device implements all MIBs known to NSO.
-
-## Compiling and Loading MIBs
-
-(See the `Makefile` in the [examples.ncs/device-management/snmp-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/snmp-ned) example under `packages/ex-snmp-ned/src/Makefile`, for an example of the below description.) Make sure that you have all MIBs available, including import dependencies, and that they contain no errors.
-
-The `ncsc --ncs-compile-mib-bundle` compiler is used to compile MIBs and MIB annotation files into NSO load files. Assuming a directory with input MIB files (and optional MIB annotation files) exist, the following command compiles all the MIBs in `device-models` and writes the output to `ncs-device-model-dir`.
-
-```bash
-$ ncsc --ncs-compile-mib-bundle device-models \
- --ncs-device-dir ./ncs-device-model-dir
-```
-
-The compilation steps performed by the `ncsc --ncs-compile-mib-bundle` are elaborated below:
-
-1. Transform the MIBs into YANG according to the IETF standardized mapping ([https://www.ietf.org/rfc/rfc6643.txt](https://www.ietf.org/rfc/rfc6643.txt)). The IETF-defined mapping makes all MIB objects read-only over NETCONF.
-2. Generate YANG deviations from the MIB; this makes SMIv2 `read-write` objects `config true` in YANG via a deviation.
-3. Include the optional MIB annotations.
-4. Merge the read-only YANG from step 1 with the read-write deviation from step 2.
-5. Compile the merged YANG files into NSO load format.
-
-These steps are illustrated in the figure below:
-
-
-<figure><figcaption>SNMP NED Compile Steps</figcaption></figure>
-
-Finally, make sure that the NSO configuration file points to the correct device model directory:
-
-```xml
-<load-path>
-  <dir>./ncs-device-model-dir</dir>
-</load-path>
-```
-
-## Configuring NSO to Speak SNMP Southbound
-
-Each managed device is configured with a name, IP address, and port (161 by default), and the SNMP version to use (v1, v2c, or v3).
-
-```cli
-admin@host# show running-config devices device r3
-devices device r3
- address     127.0.0.1
- port        2503
- device-type snmp version v3 snmp-authgroup my-authgroup
- state admin-state unlocked
-!
-```
-
-To minimize the necessary configuration, the authentication group concept (see [Authentication Groups](../../../operation-and-usage/operations/nso-device-manager.md#user_guide.devicemanager.authgroups)) is used also for SNMP. A configured managed device of the type `snmp` refers to an SNMP authgroup. An SNMP authgroup contains community strings for SNMP v1 and v2c and USM parameters for SNMP v3.
-
-```cli
-admin@host# show running-config devices authgroups snmp-group my-authgroup
-
-devices authgroups snmp-group my-authgroup
- default-map community-name public
- umap admin
- usm remote-name admin
- usm security-level auth-priv
- usm auth md5 remote-password $4$wIo7Yd068FRwhYYI0d4IDw==
- usm priv des remote-password $4$wIo7Yd068FRwhYYI0d4IDw==
- !
-!
-```
-
-In the example above, when NSO needs to speak to the device `r3`, it sees that the device is of type `snmp`, and that SNMP v3 should be used with authentication parameters from the SNMP authgroup `my-authgroup`. This authgroup maps the local NSO user `admin` to the USM user `admin`, with explicit remote passwords given. These passwords will be localized for each SNMP engine that NSO communicates with. While the passwords above are shown encrypted, when you enter them in the CLI you write them in clear text. Note also that the remote engine ID is not configured; NSO performs a discovery process to find it automatically.
-
-No NSO user other than `admin` is mapped by the authgroup `my-authgroup` for SNMP v3.
-
-## **Configure MIB Groups**
-
-With SNMP, there is no standardized, generic way for an SNMP manager to learn which MIBs an SNMP agent implements. By default, NSO assumes that an SNMP device implements all MIBs known to NSO, i.e., all MIBs that have been compiled with the `ncsc --ncs-compile-mib-bundle` command. This works just fine if all SNMP devices NSO manages are of the same type, and implement the same set of MIBs. But if NSO is configured to manage many different SNMP devices, some other mechanism is needed.
-
-In NSO, this problem is solved by using MIB groups. A MIB group is a named collection of MIB module names. A managed SNMP device can refer to one or more MIB groups. For example, below, two MIB groups are defined:
-
-```cli
-admin@ncs# show running-config devices mib-group
-
-devices mib-group basic
- mib-module [ BASIC-CONFIG-MIB BASIC-TC ]
-!
-devices mib-group snmp
- mib-module [ SNMP* ]
-!
-```
-
-The wildcard `*` can be used only at the end of a string; it is thus used to define a prefix of the MIB module name. So the string `SNMP*` matches all loaded standard SNMP modules, such as SNMPv2-MIB, SNMP-TARGET-MIB, etc.
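-
-MIB groups are ordinary NSO configuration, so the groups shown above could be created from the CLI like this:
-
-```cli
-admin@ncs(config)# devices mib-group basic mib-module [ BASIC-CONFIG-MIB BASIC-TC ]
-admin@ncs(config)# devices mib-group snmp mib-module [ SNMP* ]
-admin@ncs(config)# commit
-```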
-
-An SNMP device can then be configured to refer to one or more of the MIB groups:
-
-```cli
-admin@ncs# show running-config devices device r3 device-type snmp
-
-devices device r3
- device-type snmp version v3
- device-type snmp snmp-authgroup default
- device-type snmp mib-group [ basic snmp ]
-!
-```
-
-## Annotations for MIB Objects
-
-Most annotations for MIB objects are used to instruct NSO on how to split a large transaction into suitable SNMP SET requests. This step is not necessary for a default integration. But when, for example, ordering dependencies in the MIB are discovered, it is better to add them as annotations and let NSO handle the ordering, rather than leaving it to the CLI user or Java programmer.
-
-In some cases, NSO can automatically understand when rows in a table must be created or deleted before rows in some other table. Specifically, NSO understands that if table B has an INDEX object in table A (i.e., B sparsely augments A), then rows in table B must be created after rows in table A, and vice versa for deletions. NSO also understands that if table B AUGMENTS table A, then a row in table A must be created before any column in B is modified.
-
-However, in some MIBs, table dependencies cannot be detected automatically. In this case, these tables must be annotated with a `sort-priority`. By default, all rows have sort-priority 0. If table A has a lower sort priority than table B, then rows in table A are created before rows in table B.
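-
-As a sketch (hypothetical table names; the annotation file syntax is described further below), such an ordering dependency could be expressed as:
-
-```
-## Hypothetical annotations: create rows in exTableA before rows in exTableB
-exTableAEntry sort-priority = 10
-exTableBEntry sort-priority = 20
-```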
-
-In some tables, existing rows cannot be modified unless the row is inactivated. Once inactive, the row can be modified and then activated again. Unfortunately, there is no formal way to declare this in SMIv2, so these tables must be annotated with two statements: `ned-set-before-row-modification` and `ned-modification-dependent`. The former is used to instruct NSO which column and which value are used to inactivate a row, and the latter is used on each column that requires the row to be inactivated before modification. `ned-modification-dependent` can be used in the same table as `ned-set-before-row-modification`, or in a table that augments or sparsely augments the table with `ned-set-before-row-modification`.
-
-By default, NSO treats a writable SMIv2 object as configuration, except if the object is of type RowStatus. Any writable object that does not represent configuration must be listed in a MIB annotation file when the MIB is compiled, with the "operational" modifier.
-
-When NSO retrieves data from an SNMP device, e.g., when doing a `sync from-device`, it uses the GET-NEXT request to scan the table for available rows. When doing the GET-NEXT, NSO must ask for an accessible column. If the row has a column of type RowStatus, NSO uses this column. Otherwise, if one of the INDEX objects is accessible, it uses this object. Otherwise, if the table has been annotated with `ned-accessible-column`, this column is used. And, as a last resort, NSO does not indicate any column in the first GET-NEXT request, and uses the column returned from the device in subsequent requests. If the table has "holes" for this column, i.e., the column is not instantiated in all rows, NSO will not detect those rows.
-
-NSO can automatically create and delete table rows for tables that use the RowStatus TEXTUAL-CONVENTION, defined in RFC 2580.
-
-It is pretty common to mix configuration objects with non-configuration objects in MIBs. Specifically, it is quite common that rows are created automatically by the device, but then some columns in the row are treated as configuration data. In this case, the application programmer must tell NSO to sync from the device before attempting to modify the configuration columns, to let NSO learn which rows exist on the device.
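-
-For example (device name from the transcripts below), a sync is simply:
-
-```cli
-admin@ncs# devices device r1 sync-from
-result true
-```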
-
-Some SNMP agents require a certain order of row deletions and creations. By default, the SNMP NED sends all creates before deletes. The annotation `ned-delete-before-create` can be used on a table entry to send row deletions before row creations, for that table.
-
-Sometimes, rows in some SNMP agents cannot be modified once created. Such rows can be marked with the annotation `ned-recreate-when-modified`. This makes the SNMP NED first delete the row and then immediately recreate it with the new values.
-
-A good starting point for understanding annotations is to look at the example in the [examples.ncs/device-management/snmp-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/snmp-ned) directory. The BASIC-CONFIG-MIB MIB has a table where rows can only be modified if `bscActAdminState` is set to locked. To have NSO do this automatically when modifying entries, rather than leaving it to users, an annotation file can be created. See `BASIC-CONFIG-MIB.miba`, which contains the following:
-
-```
-## NCS Annotation module for BASIC-CONFIG-MIB
-
-bscActAdminState ned-set-before-row-modification = locked
-bscActFlow ned-modification-dependent
-```
-
-This tells NSO to set the `bscActAdminState` column to locked before modifying the `bscActFlow` column, and to restore the previous value after committing the set operation.
-
-All MIB annotations for a particular MIB are written to a file with the file suffix `.miba`. See [mib\_annotations(5)](../../../resources/man/mib_annotations.5.md) in manual pages for details.
-
-Make sure that the MIB annotation file is put into the same directory as the MIB files that are given as input to the `ncsc --ncs-compile-mib-bundle` command.
-
-## Using the SNMP NED
-
-NSO can manage SNMP devices within transactions; a transaction can span Cisco devices, NETCONF devices, and SNMP devices. If a transaction fails, NSO will generate the reverse operation to the SNMP device.
-
-The basic features of the SNMP NED are illustrated below using the [examples.ncs/device-management/snmp-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/snmp-ned) example. First, try to connect to all SNMP devices:
-
-```cli
-admin@ncs# devices connect
-
-connect-result {
- device r1
- result true
- info (admin) Connected to r1 - 127.0.0.1:2501
-}
-connect-result {
- device r2
- result true
- info (admin) Connected to r2 - 127.0.0.1:2502
-}
-connect-result {
- device r3
- result true
- info (admin) Connected to r3 - 127.0.0.1:2503
-}
-```
-
-When NSO executes the connect request for SNMP devices, it performs a get-next request with 1.1 as the var-bind. When working with the SNMP NED, it is helpful to turn on NED tracing:
-
-```bash
-$ ncs_cli -C -u admin
-```
-
-```
-admin@ncs# config
-```
-
-```cli
-admin@ncs(config)# devices global-settings trace pretty trace-dir .
-```
-
-```cli
-admin@ncs(config)# commit
-```
-
-```
-Commit complete.
-```
-
-This creates a trace file named `ned-devicename.trace`. The trace for the NCS `connect` action looks like:
-
-```bash
-$ more ned-r1.trace
-get-next-request reqid=2
- 1.1
-get-response reqid=2
- 1.3.6.1.2.1.1.1.0=Tail-f ConfD agent - 1
-```
-
-When looking at SNMP trace files it is useful to have the OBJECT-DESCRIPTOR rather than the OBJECT-IDENTIFIER. To do this, pipe the trace file to the `smixlate` tool:
-
-```bash
-$ more ned-r1.trace | smixlate $NCS_DIR/src/ncs/snmp/mibs/SNMPv2-MIB.mib
-
-get-next-request reqid=2
- 1.1
-get-response reqid=2
- sysDescr.0=Tail-f ConfD agent - 1
-```
-
-You can access the data in the SNMP systems directly (read-only and read-write objects):
-
-```cli
-admin@ncs# show devices device live-status
-
-ncs live-device r1
- live-status SNMPv2-MIB system sysDescr "Tail-f ConfD agent - 1"
- live-status SNMPv2-MIB system sysObjectID 1.3.6.1.4.1.24961
- live-status SNMPv2-MIB system sysUpTime 596197
- live-status SNMPv2-MIB system sysContact ""
- live-status SNMPv2-MIB system sysName ""
-...
-```
-
-NSO can synchronize all writable objects into CDB:
-
-```cli
-admin@ncs# devices sync-from
-sync-result {
- device r1
- result true
-...
-```
-
-```cli
-admin@ncs# show running-config devices device r1 config r:SNMPv2-MIB
-
-devices device r1
- config
- system
- sysContact ""
- sysName ""
- sysLocation ""
- !
- snmp
- snmpEnableAuthenTraps disabled;
- !
-```
-
-All the standard features of NSO with transactions and roll-backs will work with SNMP devices. The sequence below shows how to enable authentication traps for all devices as one transaction. If any device fails, NSO will automatically roll back the others. At the end of the CLI sequence a manual rollback is shown:
-
-```cli
-admin@ncs# config
-```
-
-```cli
-admin@ncs(config)# devices device r1 config r:SNMPv2-MIB snmp snmpEnableAuthenTraps enabled
-admin@ncs(config)# devices device r2 config r:SNMPv2-MIB snmp snmpEnableAuthenTraps enabled
-admin@ncs(config)# devices device r3 config r:SNMPv2-MIB snmp snmpEnableAuthenTraps enabled
-```
-
-```cli
-admin@ncs(config)# commit
-```
-
-```
-Commit complete.
-```
-
-```cli
-admin@ncs(config)# top rollback-files apply-rollback-file id 0
-```
-
-```cli
-admin@ncs(config)# commit dry-run outformat cli
-```
-
-```
-cli devices {
- device r1 {
- config {
- r:SNMPv2-MIB {
- snmp {
- - snmpEnableAuthenTraps enabled;
- + snmpEnableAuthenTraps disabled;
- }
- }
- }
- }
- device r2 {
- config {
- r:SNMPv2-MIB {
- snmp {
- - snmpEnableAuthenTraps enabled;
- + snmpEnableAuthenTraps disabled;
- }
- }
- }
- }
- device r3 {
- config {
- r:SNMPv2-MIB {
- snmp {
- - snmpEnableAuthenTraps enabled;
- + snmpEnableAuthenTraps disabled;
- }
- }
- }
- }
- }
-```
-
-```cli
-admin@ncs(config)# commit
-```
-
-```
-Commit complete.
-```
diff --git a/development/advanced-development/developing-packages.md b/development/advanced-development/developing-packages.md
deleted file mode 100644
index f7a78d9a..00000000
--- a/development/advanced-development/developing-packages.md
+++ /dev/null
@@ -1,1236 +0,0 @@
----
-description: Develop service packages to run user code.
----
-
-# Developing Packages
-
-When setting up an application project, there are several things to think about. A service package needs a service model, NSO configuration files, and mapping code. Similarly, NED packages need YANG files and NED code. We can either copy an existing example and modify that, or we can use the tool `ncs-make-package` to create an empty skeleton for a package for us. The `ncs-make-package` tool provides a good starting point for a development project. Depending on the type of package, we use `ncs-make-package` to set up a working development structure.
-
-As explained in [NSO Packages](../core-concepts/packages.md), NSO runs all user Java code and also loads all data models through an NSO package. Thus, a development project is the same as developing a package. Testing and running the package is done by putting the package in the NSO load-path and running NSO.
-
-There are different kinds of packages; NED packages, service packages, etc. Regardless of package type, the structure of the package as well as the deployment of the package into NSO is the same. The script `ncs-make-package` creates the following for us:
-
-* A Makefile to build the source code of the package. The package contains source code and needs to be built.
-* If it's a NED package, a `netsim` directory that is used by the `ncs-netsim` tool to simulate a network of devices.
-* If it is a service package, skeleton YANG and Java files that can be modified are generated.
-
-In this section, we will develop an MPLS service for a network of provider edge routers (PE) and customer equipment routers (CE). The assumption is that the routers speak NETCONF and that we have proper YANG modules for the two types of routers. The techniques described here work equally well for devices that speak other protocols than NETCONF, such as Cisco CLI or SNMP.
-
-We first want to create a simulation environment where ConfD is used as a NETCONF server to simulate the routers in our network. We plan to create a network that looks like this:
-
-
-<figure><figcaption>MPLS Network</figcaption></figure>
-
-To create the simulation network, the first thing we need to do is create NSO packages for the two router models. The packages are also exactly what NSO needs to manage the routers.
-
-Assume that the YANG files for the PE routers reside in `./pe-yang-files` and the YANG files for the CE routers reside in `./ce-yang-files`. The `ncs-make-package` tool is used to create two device packages, one called `pe` and the other `ce`:
-
-```bash
- $ ncs-make-package --netconf-ned ./pe-yang-files pe
- $ ncs-make-package --netconf-ned ./ce-yang-files ce
- $ (cd pe/src; make)
- $ (cd ce/src; make)
-```
-
-At this point, we can use the `ncs-netsim` tool to create a simulation network. `ncs-netsim` will use the Tail-f ConfD daemon as a NETCONF server to simulate the managed devices, all running on localhost.
-
-```bash
- $ ncs-netsim create-network ./ce 5 ce create-network ./pe 3 pe
-```
-
-The above command creates a network with 8 routers: 5 running the YANG models for a CE router and 3 running the YANG models for a PE router. `ncs-netsim` can be used to stop, start, and manipulate this network. For example:
-
-```bash
-$ ncs-netsim start
-DEVICE ce0 OK STARTED
-DEVICE ce1 OK STARTED
-DEVICE ce2 OK STARTED
-DEVICE ce3 OK STARTED
-DEVICE ce4 OK STARTED
-DEVICE pe0 OK STARTED
-DEVICE pe1 OK STARTED
-DEVICE pe2 OK STARTED
-```
-
-## `ncs-setup`
-
-In the previous section, we described how to use `ncs-make-package` and `ncs-netsim` to set up a simulation network. Now, we want to use NSO to control and manage precisely the simulated network. We can use the `ncs-setup` tool to set up a directory suitable for this. `ncs-setup` has a flag to set up NSO initialization files so that all devices in an `ncs-netsim` network are added as managed devices to NSO. If we do:
-
-```bash
- $ ncs-setup --netsim-dir ./netsim --dest NCS
- $ cd NCS
- $ cat README.ncs
- .......
- $ ncs
-```
-
-The above commands create the db, log, etc., directories and also create an NSO XML initialization file in `./NCS/ncs-cdb/netsim_devices_init.xml`. The `init` file is important; it is created from the content of the netsim directory, and it contains the IP address, port, auth credentials, and NED type for all the devices in the netsim environment. There is a dependency order between `ncs-setup` and `ncs-netsim`, since `ncs-setup` creates the XML init file based on the contents of the netsim environment; therefore, we must run the `ncs-netsim create-network` command before we execute the `ncs-setup` command. Once `ncs-setup` has been run and the `init` XML file has been generated, it is possible to manually edit that file.
-
-If we start the NSO CLI, we have, for example:
-
-```bash
-$ ncs_cli -u admin
-admin connected from 127.0.0.1 using console on zoe
-admin@zoe> show configuration devices device ce0
-address 127.0.0.1;
-port 12022;
-authgroup default;
-device-type {
- netconf;
-}
-state {
- admin-state unlocked;
-}
-```
-
-## The netsim Part of a NED Package
-
-If we take a look at the directory structure of the generated NETCONF NED packages, we have in `./ce`
-
-```
-|----package-meta-data.xml
-|----private-jar
-|----shared-jar
-|----netsim
-|----|----start.sh
-|----|----confd.conf.netsim
-|----|----Makefile
-|----src
-|----|----ncsc-out
-|----|----Makefile
-|----|----yang
-|----|----|----interfaces.yang
-|----|----java
-|----|----|----build.xml
-|----|----|----src
-|----|----|----|----com
-|----|----|----|----|----example
-|----|----|----|----|----|----ce
-|----|----|----|----|----|----|----namespaces
-|----doc
-|----load-dir
-```
-
-It is a NED package, and it has a directory called `netsim` at the top. This indicates to the `ncs-netsim` tool that `ncs-netsim` can create simulation networks that contain devices running the YANG models from this package. This section describes the `netsim` directory and how to modify it. `ncs-netsim` uses ConfD to simulate network elements, and to fully understand how to modify a generated `netsim` directory, some knowledge of how ConfD operates may be required.
-
-The `netsim` directory contains three files:
-
-* `confd.conf.netsim` is a configuration file for the ConfD instances. The file is run through `/bin/sed`, substituting each of the following variables with the actual value for that ConfD instance:
- 1. `%IPC_PORT%` for `/confdConfig/confdIpcAddress/port`
- 2. `%NETCONF_SSH_PORT%` - for `/confdConfig/netconf/transport/ssh/port`
- 3. `%NETCONF_TCP_PORT%` - for `/confdConfig/netconf/transport/tcp/port`
- 4. `%CLI_SSH_PORT%` - for `/confdConfig/cli/ssh/port`
- 5. `%SNMP_PORT%` - for `/confdConfig/snmpAgent/port`
- 6. `%NAME%` - for the name of the ConfD instance.
- 7. `%COUNTER%` - for the number of the ConfD instance
-* The `Makefile` should compile the YANG files so that ConfD can run them. The `Makefile` should also have an `install` target that installs all files required for ConfD to run one instance of a simulated network element. This is typically all `fxs` files.
-* An optional `start.sh` file where additional programs can be started. A good example of a package where the netsim component contains some additional C programs is the `webserver` package in [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/website-service) example.
-
-Recall the picture of the network we wish to work with: there, the PE and CE routers have an IP address and some additional data. So far, we have generated a simulated network with YANG models. The routers in our simulated network have no data in them; we can log in to one of the routers to verify that:
-
-```bash
-$ ncs-netsim cli pe0
-admin connected from 127.0.0.1 using console on zoe
-admin@zoe> show configuration interface
-No entries found.
-[ok][2012-08-21 16:52:19]
-admin@zoe> exit
-```
-
-The ConfD devices in our simulated network all have a Juniper CLI engine; thus, using the command `ncs-netsim cli [devicename]`, we can log in to an individual router.
-
-To achieve this, we need to have some additional XML initializing files for the ConfD instances. It is the responsibility of the `install` target in the netsim Makefile to ensure that each ConfD instance gets initialized with the proper init data. In the NSO example collection, the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-java) and [examples.ncs/service-management/mpls-vpn-python](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-python) examples contain the two above-mentioned PE and CE packages but modified, so that the network elements in the simulated network get initialized properly.
-
-If we run that example in the NSO example collection we see:
-
-```bash
- $ cd $NCS_DIR/examples.ncs/service-management/mpls-vpn-java
- $ make all
- ....
- $ ncs-netsim start
- .....
- $ ncs
- $ ncs_cli -u admin
-
-admin connected from 127.0.0.1 using console on zoe
-admin@zoe> show status packages package pe
-package-version 1.0;
-description "Generated netconf package";
-ncs-min-version 2.0;
-component pe {
- ned {
- netconf;
- device {
- vendor "Example Inc.";
- }
- }
-}
-oper-status {
- up;
-}
-[ok][2012-08-22 14:45:30]
-admin@zoe> request devices sync-from
-sync-result {
- device ce0
- result true
-}
-sync-result {
- device ce1
- result true
-}
-sync-result {
- .......
-admin@zoe> show configuration devices device pe0 config if:interface
-interface eth2 {
- ip 10.0.12.9;
- mask 255.255.255.252;
-}
-interface eth3 {
- ip 10.0.17.13;
- mask 255.255.255.252;
-}
-interface lo {
- ip 10.10.10.1;
- mask 255.255.0.0;
-}
-```
-
-We now have a fully simulated router network loaded into NSO, with ConfD simulating the routers.
-
-## Plug-and-play Scripting
-
-With the scripting mechanism, an end-user can add new functionality to NSO in a plug-and-play-like manner. See [Plug-and-play Scripting](../../operation-and-usage/operations/plug-and-play-scripting.md) about the scripting concept in general. It is also possible for a developer of an NSO package to enclose scripts in the package.
-
-Scripts defined in an NSO package work pretty much as system-level scripts configured with the `/ncs-config/scripts/dir` configuration parameter. The difference is that the location of the scripts is predefined. The scripts directory must be named `scripts` and must be located in the top directory of the package.
-
-In this complete example [examples.ncs/sdk-api/scripting](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/scripting), there is a `README` file and a simple post-commit script `packages/scripting/scripts/post-commit/show_diff.sh` as well as a simple command script `packages/scripting/scripts/command/echo.sh`.
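-
-The layout inside that package thus looks like this (paths from the example above):
-
-```
-|----package-meta-data.xml
-|----scripts
-|----|----command
-|----|----|----echo.sh
-|----|----post-commit
-|----|----|----show_diff.sh
-```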
-
-## Creating a Service Package
-
-So far we have only talked about packages that describe a managed device, i.e., `ned` packages. There are also `callback`, `application`, and `service` packages. A service package is a package with some YANG code that models an NSO service together with Java code that implements the service. See [Implementing Services](../core-concepts/implementing-services.md).
-
-We can generate a service package skeleton, using `ncs-make-package`, as:
-
-```bash
- $ ncs-make-package --service-skeleton java myrfs
- $ cd myrfs/src; make
-```
-
-Make sure that the package is part of the load path; we can then create test service instances that do nothing:
-
-```
-admin@zoe> show status packages package myrfs
-package-version 1.0;
-description "Skeleton for a resource facing service - RFS";
-ncs-min-version 2.0;
-component RFSSkeleton {
- callback {
- java-class-name [ com.example.myrfs.myrfs ];
- }
-}
-oper-status {
- up;
-}
-[ok][2012-08-22 15:30:13]
-admin@zoe> configure
-Entering configuration mode private
-[ok][2012-08-22 15:32:46]
-
-[edit]
-admin@zoe% set services myrfs s1 dummy 3.4.5.6
-[ok][2012-08-22 15:32:56]
-```
-
-The `ncs-make-package` will generate skeleton files for our service models and for our service logic. The package is fully buildable and runnable even though the service models are empty. Both CLI and Webui can be run. In addition to this, we also have a simulated environment with ConfD devices configured with YANG modules.
-
-Calling `ncs-make-package` with the arguments above will create a service skeleton that is placed at the root of the generated service model. However, services can be augmented anywhere or can be located in any YANG module. This can be controlled by giving the argument `--augment NAME`, where `NAME` is the path to where the service should be augmented, or, in the case of putting the service as a root container in the service YANG, by giving the argument `--root-container NAME`.
-
-Services created using `ncs-make-package` will be of type `list`. However, it is possible to have services that are of type `container` instead. A container service needs to be specified as a _presence_ container.
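-
-A minimal sketch of such a container service (hypothetical module and servicepoint names; `ncs:service-data` and `ncs:servicepoint` as used by NSO service models):
-
-```yang
-module my-service {
-  namespace "urn:example:my-service";
-  prefix mysvc;
-
-  import tailf-ncs {
-    prefix ncs;
-  }
-
-  // A container service must be a presence container
-  container my-service {
-    presence "Service is instantiated";
-    uses ncs:service-data;
-    ncs:servicepoint my-servicepoint;
-  }
-}
-```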
-
-## Java Service Implementation
-
-The service implementation logic of a service can be expressed using the Java language. For each such service, a Java class is created. This class should implement the `create()` callback method from the `ServiceCallback` interface. This method will be called to implement the service-to-device mapping logic for the service instance.
-
-We declare in the component for the package, that we have a callback component. In the `package-meta-data.xml` for the generated package, we have:
-
-```xml
-<component>
-  <name>RFSSkeleton</name>
-  <callback>
-    <java-class-name>com.example.myrfs.myrfs</java-class-name>
-  </callback>
-</component>
-```
-
-When the package is loaded, the NSO Java VM will load the jar files for the package, and register the defined class as a callback class. When the user creates a service of this type, the `create()` method will be called.
-
-## Developing our First Service Application
-
-In the following sections, we are going to show how to write a service application through several examples. The purpose of these examples is to illustrate the concepts described in previous chapters.
-
-* Service Model - a model of the service you want to provide.
-* Service Validation Logic - a set of validation rules incorporated into your model.
-* Service Logic - a Java class mapping the service model operations onto the device layer.
-
-If we take a look at the Java code in the service generated by `ncs-make-package`, first we have the `create()` method, which takes four parameters. The `ServiceContext` instance is a container for the current service transaction; with it, e.g., the transaction timeout can be controlled. The `service` parameter is a `NavuContainer` holding a read/write reference to the path in the instance tree containing the current service instance. From this point, you can start accessing all nodes contained within the created service. The `root` container is a `NavuContainer` holding a reference to the NSO root. From here, you can access the whole data model of NSO. The `opaque` parameter contains a `java.util.Properties` object instance. This object may be used to transfer additional information between consecutive calls to the create callback. It is always null in the first callback method when a service is first created. This `Properties` object can be updated (or created if null) but should always be returned.
-
-{% code title="Example: Resource Facing Service Implementation" %}
-```java
- @ServiceCallback(servicePoint="myrfsspnt",
- callType=ServiceCBType.CREATE)
- public Properties create(ServiceContext context,
- NavuNode service,
- NavuNode root,
- Properties opaque)
- throws DpCallbackException {
- String servicePath = null;
- try {
- servicePath = service.getKeyPath();
-
- //Now get the single leaf we have in the service instance
- // NavuLeaf sServerLeaf = service.leaf("dummy");
-
- //..and its value (which is a ipv4-address )
- // ConfIPv4 ip = (ConfIPv4)sServerLeaf.value();
-
- //Get the list of all managed devices.
- NavuList managedDevices = root.container("devices").list("device");
-
- // iterate through all manage devices
- for(NavuContainer deviceContainer : managedDevices.elements()){
-
- // here we have the opportunity to do something with the
- // ConfIPv4 ip value from the service instance,
- // assume the device model has a path /xyz/ip, we could
- // deviceContainer.container("config").
- // .container("xyz").leaf(ip).set(ip);
- //
- // remember to use NAVU sharedCreate() instead of
- // NAVU create() when creating structures that may be
- // shared between multiple service instances
- }
- } catch (NavuException e) {
- throw new DpCallbackException("Cannot create service " +
- servicePath, e);
- }
- return opaque;
- }
-```
-{% endcode %}
-
-The opaque object is extremely useful for passing information between different invocations of the `create()` method. The returned `Properties` object instance is stored persistently. If the create method computes something on its first invocation, it can return that computation to have it passed in as a parameter on the second invocation.
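-
-A sketch of this pattern inside `create()` (the property name and the `allocateId()` helper are hypothetical):
-
-```java
-if (opaque == null) {
-    opaque = new Properties();
-}
-String id = opaque.getProperty("allocated-id");
-if (id == null) {
-    // First invocation: do the expensive computation once and remember the result
-    id = allocateId();
-    opaque.setProperty("allocated-id", id);
-}
-// Subsequent invocations reuse 'id', so the exact same structures are recreated
-return opaque;
-```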
-
-This is crucial to understand: the FASTMAP mapping logic relies on the fact that a modification of an existing service instance can be realized as a full deletion of what the service instance created when it was first created, followed by yet another create, this time with slightly different parameters. The NSO transaction engine will then compute the minimal difference and send it southbound to all involved managed devices. Thus, a well-behaved service instance `create()` method will, when being modified, recreate exactly the same structures it created the first time.
-
-The best way to debug this and to ensure that a modification of a service instance really only sends the minimal NETCONF diff to the southbound managed devices, is to turn on NETCONF trace in the NSO, modify a service instance, and inspect the XML sent to the managed devices. A badly behaving `create()` method will incur large reconfigurations of the managed devices, possibly leading to traffic interruptions.
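-
-Tracing is enabled per device; a sketch:
-
-```cli
-admin@ncs(config)# devices device ce0 trace pretty
-admin@ncs(config)# commit
-```
-
-The resulting trace file for the device can then be inspected in the configured trace directory.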
-
-It is highly recommended to also implement a `selftest()` action in conjunction with a service. The purpose of the `selftest()` action is to trigger a test of the service. The `ncs-make-package` tool creates a `selftest()` action that takes no input parameters and has two output parameters.
-
-{% code title="Example: Selftest yang Definition" %}
-```
-tailf:action self-test {
-  tailf:info "Perform self-test of the service";
-  tailf:actionpoint myrfsselftest;
-  output {
-    leaf success {
-      type boolean;
-    }
-    leaf message {
-      type string;
-      description
-        "Free format message.";
-    }
-  }
-}
-```
-{% endcode %}
-
-The `selftest()` implementation is expected to do some diagnosis of the service. This can possibly include the use of testing equipment or probes.
-
-{% code title="Example: Selftest Action" %}
-```java
- /**
- * Init method for selftest action
- */
- @ActionCallback(callPoint="myrfsselftest", callType=ActionCBType.INIT)
- public void init(DpActionTrans trans) throws DpCallbackException {
- }
-
- /**
- * Selftest action implementation for service
- */
- @ActionCallback(callPoint="myrfsselftest", callType=ActionCBType.ACTION)
- public ConfXMLParam[] selftest(DpActionTrans trans, ConfTag name,
- ConfObject[] kp, ConfXMLParam[] params)
- throws DpCallbackException {
- try {
- // Refer to the service yang model prefix
- String nsPrefix = "myrfs";
- // Get the service instance key
- String str = ((ConfKey)kp[0]).toString();
-
- return new ConfXMLParam[] {
- new ConfXMLParamValue(nsPrefix, "success", new ConfBool(true)),
- new ConfXMLParamValue(nsPrefix, "message", new ConfBuf(str))};
-
- } catch (Exception e) {
- throw new DpCallbackException("selftest failed", e);
- }
- }
-```
-{% endcode %}
-
-## Tracing Within the NSO Service Manager
-
-The NSO Java VM logging functionality is provided using LOG4J. The logging is composed of a configuration file (`log4j2.xml`) where static settings are made, i.e., all settings that can be done for LOG4J (see [LOG4J](https://logging.apache.org/log4j/2.x/) for more comprehensive log settings), plus dynamically configurable log settings under `/java-vm/java-logging`.
-
-When we start the NSO Java VM in `main()`, the `log4j2.xml` configuration file is parsed by the LOG4J framework, which applies the static settings to the NSO Java VM environment. The file is searched for in the Java CLASSPATH.
-
-NSO Java VM starts several internal processes or threads. One of these threads executes a service called `NcsLogger` which handles the dynamic configurations of the logging framework. When `NcsLogger` starts, it initially reads all the configurations from `/java-vm/java-logging` and applies them, thus overwriting settings that were previously parsed by the LOG4J framework.
-
-After it has applied the changes from the configuration it starts to listen to changes that are made under `/java-vm/java-logging`.
-
-The LOG4J framework has 8 verbosity levels: `ALL`, `DEBUG`, `ERROR`, `FATAL`, `INFO`, `OFF`, `TRACE`, and `WARN`. They have the following relations: `ALL` > `TRACE` > `DEBUG` > `INFO` > `WARN` > `ERROR` > `FATAL` > `OFF`. This means that the highest verbosity we can have is the level `ALL`, and the lowest is no traces at all, i.e., `OFF`. There are corresponding enumerations for each LOG4J verbosity level in `tailf-ncs.yang`; thus, the `NcsLogger` does the mapping between the enumeration type `log-level-type` and the LOG4J verbosity levels.
-
-{% code title="Example: tailf-ncs-java-vm.yang" %}
-```
- typedef log-level-type {
- type enumeration {
- enum level-all {
- value 1;
- }
- enum level-debug {
- value 2;
- }
- enum level-error {
- value 3;
- }
- enum level-fatal {
- value 4;
- }
- enum level-info {
- value 5;
- }
- enum level-off {
- value 6;
- }
- enum level-trace {
- value 7;
- }
- enum level-warn {
- value 8;
- }
- }
- description
- "Levels of logging for Java packages in log4j.";
- }
-
- ....
-
- container java-vm {
- ....
- container java-logging {
- tailf:info "Configure Java Logging";
- list logger {
- tailf:info "List of loggers";
- key "logger-name";
- description
- "Each entry in this list holds one representation of a logger with
- a specific level defined by log-level-type. The logger-name
- is the name of a Java package. logger-name can thus be for
- example com.tailf.maapi, or com.tailf etc.";
-
- leaf logger-name {
- tailf:info "The name of the Java package";
- type string;
- mandatory true;
- description
- "The name of the Java package for which this logger
- entry applies.";
- }
- leaf level {
- tailf:info "Log-level for this logger";
- type log-level-type;
- mandatory true;
- description
- "Corresponding log-level for a specific logger.";
- }
- }
- }
-```
-{% endcode %}
-
-To change a verbosity level one needs to create a logger. A logger is something that controls the logging of certain parts of the NSO Java API.
-
-The loggers in the system are hierarchically structured which means that there is one root logger that always exists. All descendants of the root logger inherit their settings from the root logger if the descendant logger doesn't overwrite its settings explicitly.
-
-The LOG4J loggers are mapped to the package level in the NSO Java API, so the root logger that exists has a direct descendant, the package `com`, which in turn has a descendant `com.tailf`.
-
-The `com.tailf` logger has a direct descendant corresponding to every package in the system, for example `com.tailf.cdb`, `com.tailf.maapi`, etc.
-
-As in the default case, one could configure a logger in the static settings, that is, in the `log4j2.properties` file, but this would mean that we need to explicitly restart the NSO Java VM. Alternatively, one could configure a logger dynamically if an NSO restart is not desired.
-
-Recall that if a logger is not configured explicitly then it will inherit its settings from its predecessors. To overwrite a logger setting we create a logger in NSO.
-
-To create a logger, let's say, for example, that one uses the Maapi API to read and write configuration changes in NSO, and we want to show all traces, including `INFO` level traces. To enable `INFO` traces for the Maapi classes (located in the package `com.tailf.maapi`) during runtime, we start, for example, a CLI session and create a logger called `com.tailf.maapi`.
-
-```cli
-ncs@admin% set java-vm java-logging logger com.tailf.maapi level level-info
-[ok][2010-11-05 15:11:47]
-ncs@admin% commit
-Commit complete.
-```
-
-When we commit our changes to CDB, `NcsLogger` will notice that a change has been made under `/java-vm/java-logging` and will apply the logging settings to the logger `com.tailf.maapi` that we just created. We explicitly set the `INFO` level on that logger. All the descendants of `com.tailf.maapi` will automatically inherit their settings from it.
-
-So where do the traces go? With the default configuration (in `log4j2.properties`), `appender.dest1.type=Console`, the LOG4J framework forwards all traces to stdout/stderr.
-
-In NSO, all `stdout`/`stderr` goes first through the service manager. The service manager has a configuration under `/java-vm/stdout-capture` that controls where the `stdout`/`stderr` will end up.
-
-By default, the captured output is written to a file called `./ncs-java-vm.log`.
-
-{% code title="Example: stdout Capture" %}
-```yang
- container stdout-capture {
- tailf:info "Capture stdout and stderr";
- description
- "Capture stdout and stderr from the Java VM.
-
- Only applicable if auto-start is 'true'.";
- leaf enabled {
- tailf:info "Enable stdout and stderr capture";
- type boolean;
- default true;
- }
- leaf file {
- tailf:info "Write Java VM output to file";
- type string;
- default "./ncs-java-vm.log";
- description
- "Write Java VM output to filename.";
- }
- leaf stdout {
- tailf:info "Write output to stdout";
- type empty;
- description
- "If present write output to stdout, useful together
- with the --foreground flag to ncs.";
- }
- }
-```
-{% endcode %}
-
-It is important to consider that when creating a logger (in this case `com.tailf.maapi`), the name of the logger has to be an existing package known by the NSO classloader.
-
-One could also create a logger named `com.tailf` with some desired level. This would set all packages (`com.tailf.*`) to the same level. A common usage is to set `com.tailf` to level `INFO`, which would enable all traces, including `INFO`, from all packages.
-
-If one would like to turn off all available traces in the system (quiet mode), then configure `com.tailf` (or `com`) to level `OFF`.
-
-There are `INFO` level messages in all parts of the NSO Java API, `ERROR` level messages when an exception occurs, and warning messages (level `WARN`) in some places in the packages.
-
-There are also protocol traces between the Java API and NSO, which can be enabled by creating a logger `com.tailf.conf` with the `DEBUG` trace level.
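-
-Since logger names follow the Java package hierarchy, your own package code picks up these dynamic settings automatically if it obtains its logger in the usual LOG4J way. A minimal sketch (the class name is hypothetical):
-
-```java
-import org.apache.logging.log4j.LogManager;
-import org.apache.logging.log4j.Logger;
-
-public class MyServiceLogic {
-    // The logger is named after the class (and thus its package), so a
-    // logger configured under /java-vm/java-logging for this package, or
-    // any ancestor package, applies here.
-    private static final Logger LOG =
-        LogManager.getLogger(MyServiceLogic.class);
-
-    public void doWork() {
-        LOG.info("shown when the effective level is INFO or more verbose");
-        LOG.debug("shown only at DEBUG or TRACE");
-    }
-}
-```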
-
-## Controlling Error Messages Info Level from Java
-
-When processing in the `java-vm` fails, the exception error message is reported back to NCS. This can be more or less informative depending on how elaborate the message in the thrown exception is. Also, the exception can be wrapped one or several times, with the original exception indicated as the root cause of the wrapped exception.
-
-In debugging and error reporting, these root cause messages can be valuable for understanding what actually happens in the Java code. On the other hand, in normal operations, just a top-level message without too many details is preferred. The exceptions are also always logged in the `java-vm` log, but if this log is large, it can be troublesome to correlate a certain exception to a specific action in NCS. For this reason, it is possible to configure the level of detail shown by NCS for a `java-vm` exception. The leaf `/ncs:java-vm/exception-error-message/verbosity` takes one of three values:
-
-* `standard`: Show the message from the top exception. This is the default.
-* `verbose`: Show all messages for the chain of cause exceptions, if any.
-* `trace`: Show messages for the chain of cause exceptions with exception class and the trace for the bottom root cause.
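-
-On the Java side, the cause chain that `verbose` and `trace` display is simply the chain of wrapped exceptions. A minimal sketch of how such a chain typically arises in callback code (`doMapping` is a hypothetical placeholder):
-
-```java
-try {
-    // Some mapping call that may fail, e.g. with a NullPointerException.
-    doMapping(service);
-} catch (Exception e) {
-    // Passing e as the cause preserves the root cause, so that
-    // verbosity verbose/trace can show the whole chain.
-    throw new DpCallbackException("Service create failed", e);
-}
-```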
-
-Here is an example of how this can be used. In the [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/website-service) example, we try to create a service without the necessary preparations:
-
-{% code title="Example: Setting Error Message Verbosity" %}
-```cli
-admin@ncs% set services web-site s1 ip 1.2.3.4 port 1111 url x.se
-[ok][2013-03-25 10:46:46]
-
-[edit]
-admin@ncs% commit
-Aborted: Service create failed
-[error][2013-03-25 10:46:48]
-
-This is a very generic error message which does not describe what really
-happens in the Java code. Here the java-vm log has to be analyzed to find
-the problem. However, with this CLI session open, we can from another CLI
-set the error reporting level to trace:
-
-$ ncs_cli -u admin
-admin@ncs> configure
-admin@ncs% set java-vm exception-error-message verbosity trace
-admin@ncs% commit
-
-If we now issue the commit again in the original CLI session, we get the
-following error message that pinpoints the problem in the code:
-
-admin@ncs% commit
-Aborted: [com.tailf.dp.DpCallbackException] Service create failed
-Trace : [java.lang.NullPointerException]
- com.tailf.conf.ConfKey.hashCode(ConfKey.java:145)
- java.util.HashMap.getEntry(HashMap.java:361)
- java.util.HashMap.containsKey(HashMap.java:352)
- com.tailf.navu.NavuList.refreshElem(NavuList.java:1007)
- com.tailf.navu.NavuList.elem(NavuList.java:831)
- com.example.websiteservice.websiteservice.WebSiteServiceRFS.crea...
- com.tailf.nsmux.NcsRfsDispatcher.applyStandardChange(NcsRfsDispa...
- com.tailf.nsmux.NcsRfsDispatcher.dispatch(NcsRfsDispatcher.java:...
- sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessor...
- sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethod...
- java.lang.reflect.Method.invoke(Method.java:616)
- com.tailf.dp.annotations.DataCallbackProxy.writeAll(DataCallback...
- com.tailf.dp.DpTrans.protoCallback(DpTrans.java:1357)
- com.tailf.dp.DpTrans.read(DpTrans.java:571)
- com.tailf.dp.DpTrans.run(DpTrans.java:369)
- java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExec...
- java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExe...
- java.lang.Thread.run(Thread.java:679)
- com.tailf.dp.DpThread.run(DpThread.java:44)
-[error][2013-03-25 10:47:09]
-```
-{% endcode %}
-
-## Loading Packages
-
-NSO will, at first start, take the packages found in the load path and copy them into a directory under the supervision of NSO, located at `./state/packages-in-use`. Later starts of NSO will not take any new copies from the packages `load-path`, so changes will not take effect by default. The reason for this is that in normal operation, changing package definitions as a side effect of a restart is unwanted behavior. Instead, these types of changes are part of an NSO installation upgrade.
-
-During package development, as opposed to operations, it is usually desirable that all changes to package definitions in the package load path take effect immediately. There are two ways to make this happen. Either start `ncs` with the `--with-reload-packages` directive:
-
-```bash
-$ ncs --with-reload-packages
-```
-
-Or, set the environment variable `NCS_RELOAD_PACKAGES`, for example like this:
-
-```bash
-$ export NCS_RELOAD_PACKAGES=true
-```
-
-It is strongly recommended to use the `NCS_RELOAD_PACKAGES` environment variable approach since it guarantees that the packages are updated in all situations.
-
-It is also possible to request a running NSO to reload all its packages.
-
-```cli
-admin@iron> request packages reload
-```
-
-This request can only be performed in operational mode, and the effect is that all packages will be updated, and any change in YANG models or code will take effect. If any YANG models are changed, an automatic CDB data upgrade will be executed. If manual (user code) data upgrades are necessary, the package should contain an `upgrade` component. This `upgrade` component will be executed as part of the package reload. See [Writing an Upgrade Package Component](../core-concepts/using-cdb.md#ncs.cdb.upgrade.comp) for information on how to develop an upgrade component.
-
-If the change in a package does not affect the data model or shared Java code, there is another command:
-
-```cli
-admin@iron> request packages package mypack redeploy
-```
-
-This will redeploy the private JARs in the Java VM for the Java package, restart the Python VM for the Python package, and reload the templates associated with the package. However, this command will not be sensitive to changes in the YANG models or shared JARs for the Java package.
-
-## Debugging the Service and Using Eclipse IDE
-
-By default, NCS will start the Java VM by invoking the command `$NCS_DIR/bin/ncs-start-java-vm`. That script will invoke:
-
-```bash
- $ java com.tailf.ncs.NcsJVMLauncher
-```
-
-The class `NcsJVMLauncher` contains the `main()` method. The started Java VM will automatically retrieve and deploy all Java code for the packages defined in the load path in the `ncs.conf` file. No other specification than the `package-meta-data.xml` for each package is needed.
-
-In the NSO CLI, there exist several settings and actions for the NSO Java VM. If we do:
-
-```bash
-$ ncs_cli -u admin
-
-admin connected from 127.0.0.1 using console on iron.local
-admin@iron> show configuration java-vm | details
-stdout-capture {
- enabled;
- file ./logs/ncs-java-vm.log;
-}
-connect-time 30;
-initialization-time 20;
-synchronization-timeout-action log-stop;
-java-thread-pool {
- pool-config {
- cfg-core-pool-size 5;
- cfg-keep-alive-time 60;
- cfg-maximum-pool-size 256;
- }
-}
-[ok][2012-07-12 10:45:59]
-```
-
-We see some of the settings that are used to control how the NSO Java VM runs. In particular, here we're interested in `/java-vm/stdout-capture/file`.
-
-The NSO daemon will, when it starts, also start the NSO Java VM, and it will capture the stdout output from the NSO Java VM and send it to the file `./logs/ncs-java-vm.log`. For more details on the Java VM settings, see the [NSO Java VM](../core-concepts/nso-virtual-machines/nso-java-vm.md).
-
-Thus, if we `tail -f` that file, we get all the output from the Java code. That leads us to the first and simplest way of developing Java code. If we now:
-
-1. Edit our Java code.
-2. Recompile that code in the package, e.g., `cd ./packages/myrfs/src; make`
-3. Restart the Java code, either by telling NSO to restart the entire NSO Java VM from the NSO CLI (note: this requires the environment variable `NCS_RELOAD_PACKAGES=true`):
-
- ```cli
- admin@iron% request java-vm restart
- result Started
- [ok][2012-07-12 10:57:08]
- ```
-
- \
- Or instructing NSO to just redeploy the package we're currently working on.
-
- ```cli
- admin@iron% request packages package stats redeploy
- result true
- [ok][2012-07-12 10:59:01]
- ```
-
-We can then do `tail -f logs/ncs-java-vm.log` to check for printouts and log messages. Typically, there is quite a lot of data in the NSO Java VM log, so it can sometimes be hard to find our own printouts and log messages. Therefore, it can be convenient to use the command below, which will make the relevant exception stack traces visible in the CLI.
-
-```cli
-admin@iron% set java-vm exception-error-message verbosity trace
-```
-
-It's also possible to control dynamically, from the CLI, the level of logging as well as which Java packages shall log. Say that we're interested in Maapi calls but don't want the log cluttered with what are really NSO Java library internal calls. We can then do:
-
-```cli
- admin@iron% set java-vm java-logging logger com.tailf.ncs level level-error
- [ok][2012-07-12 11:10:50]
- admin@iron% set java-vm java-logging logger com.tailf.conf level level-error
- [ok][2012-07-12 11:11:15]
- admin@iron% commit
- Commit complete.
-```
-
-Now, considerably less log data will be produced. If we want these settings to always be there, even if we restart NSO from scratch with an empty database (no `.cdb` files in `./ncs-cdb`), we can save these settings as XML and put that XML file inside the `ncs-cdb` directory. That way, `ncs` will use this data as initialization data on a fresh restart. We do:
-
-```bash
- $ ncs_load -F p -p /ncs:java-vm/java-logging > ./ncs-cdb/loglevels.xml
- $ ncs-setup --reset
- $ ncs
-```
-
-The `ncs-setup --reset` command stops the NSO daemon and resets NSO back to factory defaults. A restart of NSO will reinitialize NSO from all XML files found in the CDB directory.
-
-### Running the NSO Java VM Standalone
-
-It's possible to tell NSO to not start the NSO Java VM at all. This is interesting in two different scenarios. The first is if we want to run the NSO Java code embedded in a larger application, such as a Java Application Server (JBoss); the other is when debugging a package.
-
-First, we configure NSO to not start the NSO Java VM at all by adding the following snippet to `ncs.conf`:
-
-```xml
-<java-vm>
-  <auto-start>false</auto-start>
-</java-vm>
-```
-
-Now, after a restart or a configuration reload, no Java code is running. If we do:
-
-```bash
- admin@iron> show status packages
-```
-
-We will see that the `oper-status` of the packages is `java-uninitialized`. We can also do:
-
-```bash
- admin@iron> show status java-vm
- start-status auto-start-not-enabled;
- status not-connected;
- [ok][2012-07-12 11:27:28]
-```
-
-This is expected since we've told NSO to not start the NSO Java VM. Now, we can do that manually, at the UNIX shell prompt.
-
-```bash
-$ ncs-start-java-vm
-.....
-.. all stdout from NCS Java VM
-```
-
-So, now we're in a position where we can manually stop the NSO Java VM, recompile the Java code, and restart the NSO Java VM. This development cycle works fine. However, even though we're running the NSO Java VM standalone, we can still redeploy packages from the NSO CLI to reload and restart just our Java code (no need to restart the NSO Java VM).
-
-```bash
- admin@iron% request packages package stats redeploy
- result true
- [ok][2012-07-12 10:59:01]
-```
-
-### Using Eclipse to Debug the Package Java Code
-
-Since we can run the NSO Java VM standalone in a UNIX shell, we can also run it inside Eclipse. If we stand in an NSO project directory, like the one generated earlier in this section, we can issue the command:
-
-```bash
-$ ncs-setup --eclipse-setup
-```
-
-This will generate two files, `.classpath` and `.project`. Add this directory to Eclipse as a **File** -> **New** -> **Java Project**, uncheck **Use default location**, and enter the directory where the `.classpath` and `.project` files have been generated. We're immediately ready to run this code in Eclipse. All we need to do is choose the `main()` routine in the `NcsJVMLauncher` class.
-
-The Eclipse debugger now works as usual, and we can, at will, start and stop the Java code. One caveat worth mentioning is that there are a few timeouts between NSO and the Java code that will trigger when we sit in the debugger. While developing with the Eclipse debugger and breakpoints, we typically want to disable all these timeouts.
-
-First, we have three timeouts in `ncs.conf` that matter. Copy the system `ncs.conf` and set the following three values to a large value. See the man page [ncs.conf(5)](../../resources/man/ncs.conf.5.md) for a detailed description of what those values are.
-
-```
-/ncs-config/api/new-session-timeout
-/ncs-config/api/query-timeout
-/ncs-config/api/connect-timeout
-```
-
-If these timeouts are triggered, NSO will close all sockets to the Java VM and all bets are off.
-
-```bash
-$ cp $NCS_DIR/etc/ncs/ncs.conf .
-```
-
-Edit the file and enter the following XML entry just after the Web UI entry.
-
-```xml
-<api>
-  <new-session-timeout>PT1000S</new-session-timeout>
-  <query-timeout>PT1000S</query-timeout>
-  <connect-timeout>PT1000S</connect-timeout>
-</api>
-```
-
-Now, restart NCS.
-
-We also have a few timeouts that are dynamically reconfigurable from the CLI. We do:
-
-```bash
-$ ncs_cli -u admin
-
-admin connected from 127.0.0.1 using console on iron.local
-admin@iron> configure
-Entering configuration mode private
-[ok][2012-07-12 12:54:13]
-admin@iron% set devices global-settings connect-timeout 1000
-[ok][2012-07-12 12:54:31]
-
-[edit]
-admin@iron% set devices global-settings read-timeout 1000
-[ok][2012-07-12 12:54:39]
-
-[edit]
-admin@iron% set devices global-settings write-timeout 1000
-[ok][2012-07-12 12:54:44]
-
-[edit]
-admin@iron% commit
-Commit complete.
-```
-
-Then, to save these settings so that NCS will have them again on a clean restart (no CDB files):
-
-```bash
-$ ncs_load -F p -p /ncs:devices/global-settings > ./ncs-cdb/global-settings.xml
-```
-
-### Remote Connecting with Eclipse to the NSO Java VM
-
-The Eclipse Java debugger can connect remotely to an NSO Java VM and debug it. This requires that the NSO Java VM has been started with some additional flags. By default, the script in `$NCS_DIR/bin/ncs-start-java-vm` is used to start the NSO Java VM. If we provide the `-d` flag, we will launch the NSO Java VM with:
-
-```
-"-Xdebug -Xrunjdwp:transport=dt_socket,address=9000,server=y,suspend=n"
-```
-
-This is what is needed to be able to connect remotely to the NSO Java VM. In the `ncs.conf` file:
-
-```xml
-<java-vm>
-  <start-command>ncs-start-java-vm -d</start-command>
-</java-vm>
-```
-
-Now, if we add a debug configuration in Eclipse and connect to port 9000 on localhost, we can attach the Eclipse debugger to an already running system and debug it remotely.
-
-## Working with the `ncs-project`
-
-An NSO project is a complete running NSO installation. It contains all the needed packages and the config data that is required to run the system.
-
-By using the `ncs-project` commands, the project can be populated with the necessary packages and kept updated. This can be used for encapsulating NSO demos or even a full-blown turn-key system.
-
-For a developer, the typical workflow looks like this:
-
-1. Create a new project using the `ncs-project create` command.
-2. Define what packages to use in the `project-meta-data.xml` file.
-3. Fetch any remote packages with the `ncs-project update` command.
-4. Prepare any initial data and/or config files.
-5. Run the application.
-6. Possibly export the project for somebody else to run.
-
-### Create a New Project
-
-Using the `ncs-project create` command, a new project is created. The file `project-meta-data.xml` should be updated with relevant information as will be described below. The project will also get a default `ncs.conf` configuration file that can be edited to better match different scenarios. All files and directories should be put into a version control system, such as Git.
-
-{% code title="Example: Creating a New Project" %}
-```bash
-$ ncs-project create test_project
-Creating directory: /home/developer/dev/test_project
-Using NCS 5.7 found in /home/developer/ncs_dir
-wrote project to /home/developer/dev/test_project
-```
-{% endcode %}
-
-A directory called `test_project` is created containing the files and directories of an NSO project as shown below:
-
-```
-test_project/
-|-- init_data
-|-- logs
-|-- Makefile
-|-- ncs-cdb
-|-- ncs.conf
-|-- packages
-|-- project-meta-data.xml
-|-- README.ncs
-|-- scripts
-|-- |-- command
-|-- |-- post-commit
-|-- setup.mk
-|-- state
-|-- test
-|-- |-- internal
-|-- |-- |-- lux
-|-- |-- |-- basic
-|-- |-- |-- |-- Makefile
-|-- |-- |-- |-- run.lux
-|-- |-- |-- Makefile
-|-- |-- Makefile
-|-- Makefile
-|-- pkgtest.env
-```
-
-The `Makefile` contains targets for building, starting, stopping, and cleaning the system. It also contains targets for entering the CLI as well as some useful targets for dealing with any Git packages. Study the `Makefile` to learn more.
-
-Any initial CDB data can be put in the `init_data` directory. The `Makefile` will copy any files in this directory to the `ncs-cdb` directory before starting NSO.
-
-There is also a test directory created with a directory structure used for automatic tests. These tests are dependent on the test tool [Lux](https://github.com/hawk/lux.git).
-
-### Project Setup
-
-To fill this project with anything meaningful, the `project-meta-data.xml` file needs to be edited.
-
-The project version number is configurable; the version we get from the `create` command is 1.0. The description should also be changed to a small text explaining what the project is intended for. The initial content of our `project-meta-data.xml` may now look like this:
-
-{% code title="Example: Project Metadata" %}
-```xml
-<project-meta-data>
-  <name>test_project</name>
-  <project-version>1.0</project-version>
-  <description>Skeleton for a NCS project</description>
-</project-meta-data>
-```
-{% endcode %}
-
-For this example, let's say we have a released package: `ncs-4.1.2-cisco-ios-4.1.5.tar.gz`, a package located in a remote git repository `foo.git`, and a local package that we have developed ourselves: `mypack`. The relevant part of our `project-meta-data.xml` file would then look like this:
-
-{% code title="Example: Package Project Metadata" %}
-```xml
-<package>
-  <name>cisco-ios</name>
-  <url>file:///tmp/ncs-4.1.2-cisco-ios-4.1.5.tar.gz</url>
-</package>
-
-<package>
-  <name>foo</name>
-  <git>
-    <repo>ssh://git@my-repo.com/foo.git</repo>
-    <branch>stable</branch>
-  </git>
-</package>
-
-<package>
-  <name>mypack</name>
-  <local/>
-</package>
-```
-{% endcode %}
-
-By specifying netsim devices in the `project-meta-data.xml` file, the necessary commands for creating the netsim configuration will be generated in the `setup.mk` file that `ncs-project update` creates. The `setup.mk` file is included in the top `Makefile`, and provides some useful make targets for creating and deleting our netsim setup.
-
-{% code title="Example: Netsim Project Metadata" %}
-```xml
-<netsim>
-  <device>
-    <name>cisco-ios</name>
-    <prefix>ce</prefix>
-    <num-devices>2</num-devices>
-  </device>
-</netsim>
-```
-{% endcode %}
-
-When done editing the `project-meta-data.xml`, run the command `ncs-project update`. Add the `-v` switch to see what the command does.
-
-{% code title="Example: NSO Project Update" %}
-```bash
- $ ncs-project update -v
- ncs-project: installing packages...
- ncs-project: found local installation of "mypack"
- ncs-project: unpacked tar file: /tmp/ncs-4.1.2-cisco-ios-4.1.5.tar.gz
- ncs-project: git clone "ssh://git@my-repo.com/foo.git" "/home/developer/dev/test_project/packages/cisco-ios"
- ncs-project: git checkout -q "stable"
- ncs-project: installing packages...ok
- ncs-project: resolving package dependencies...
- ncs-project: resolving package dependencies...ok
- ncs-project: determining build order...
- ncs-project: determining build order...ok
- ncs-project: determining ncs-min-version...
- ncs-project: determining ncs-min-version...ok
- The file 'setup.mk' will be overwritten, Continue (y/n)?
-```
-{% endcode %}
-
-Answer `y` when asked about overwriting the `setup.mk` file. After this, a new runtime directory is created with NCS and simulated devices configured. You are now ready to compile your system with `make all`.
-
-If you have a lot of packages, all located in the same Git repository, it is convenient to specify the repository just once. This can be done by adding a `packages-store` section as shown below:
-
-{% code title="Example: Project Packages Store" %}
-```xml
-<packages-store>
-  <git>
-    <repo>ssh://git@my-repo.com</repo>
-    <branch>stable</branch>
-  </git>
-</packages-store>
-
-<package>
-  <name>foo</name>
-</package>
-```
-{% endcode %}
-
-This means that if a package does not have a git repository defined, the repository and branch in the `packages-store` are used.
-
-{% hint style="info" %}
-If a package has specified that it is dependent on some other packages in its `package-meta-data.xml` file, `ncs-project update` will try to clone those packages from any of the specified `packages-store`. To override this behavior, specify explicitly all packages in your `project-meta-data.xml` file.
-{% endhint %}
-
-### Export
-
-When the development is done, the project can be bundled together and distributed further. `ncs-project` comes with a command, `export`, used for this purpose. The `export` command creates a tarball of the required files and any extra files as specified in the `project-meta-data.xml` file.
-
-{% hint style="info" %}
-Developers are encouraged to distribute the project, either via some source code management system, like Git, or by exporting bundles using the `export` command.
-{% endhint %}
-
-When using `export`, a subset of the packages should be configured for exporting. The reason for not exporting all packages in a project is that some of the packages may be used solely for testing or similar. When configuring the bundle, the packages included in the bundle are leafrefs to the packages defined at the root of the model; see the example below (the NSO Project YANG Model). We can also define a specific tag, commit, or branch, or even a different location for the packages, different from the one used while developing. For example, we might develop against an experimental branch of a repository, but bundle with a specific release of that same repository.
-
-{% hint style="info" %}
-Bundled packages specified as of type `file://` or `url://` will not be built; they will simply be included as-is by the export command.
-{% endhint %}
-
-The bundle also has a name and a list of included files. Unless another name is specified from the command line, the final compressed file will be named using the configured bundle name and project version.
-
-We create the tarball by using the `export` command:
-
-{% code title="Example: NSO Project Export" %}
-```bash
-$ ncs-project export
-```
-{% endcode %}
-
-There are two ways to make use of a bundle:
-
-* Together with the `ncs-project create --from-bundle=<file>` command.
-* Extract the included packages using tar for manual installation in an NSO deployment.
-
-In the first scenario, it is possible to create an NSO project, populated with the packages from the bundle, to create a ready-to-run NSO system. The optional `init_data` part makes it possible to prepare CDB with configuration, before starting the system the very first time. The `project-meta-data.xml` file will specify all the packages as local to avoid any dangling pointers to non-accessible git repositories.
-
-The second scenario is intended for the case when you want to install the packages manually, or via a custom process, into your running NSO systems.
-
-The switch `--snapshot` will add a timestamp in the name of the created bundle file to make it clear that it is not a proper version numbered release.
-
-To import our exported project, we do an `ncs-project create` and point out where the bundle is located.
-
-{% code title="Example: NSO Project Import" %}
-```bash
-$ ncs-project create --from-bundle=test_project-1.0.tar.gz
-```
-{% endcode %}
-
-### NSO Project Manual Pages
-
-`ncs-project` has a full set of man pages that describe its usage and syntax. Below is an overview of the commands, which are explained in more detail in the man pages.
-
-{% code title="Example: NSO Project Man Page" %}
-```bash
-$ ncs-project --help
-
-Usage: ncs-project <command>
-
- COMMANDS
-
- create Create a new ncs-project
-
- update Update the project with any changes in the
- project-meta-data.xml
-
- git For each git package repo: execute an arbitrary git
- command.
-
- export Export a project, including init-data and configuration.
-
- help Display the man page for <command>
-
- OPTIONS
-
- -h, --help Show this help text.
-
- -n, --ncs-min-version Display the NCS version(s) needed
- to run this project
-
- --ncs-min-version-non-strict As -n, but include the non-matching
- NCS version(s)
-
-See manpage for ncs-project(1) for more info.
-```
-{% endcode %}
-
-### The `project-meta-data.xml` File
-
-The `project-meta-data.xml` file defines the project metadata for an NSO project according to the `$NCS_DIR/src/ncs/ncs_config/tailf-ncs-project.yang` YANG model. See the `tailf-ncs-project.yang` module, where all options are described in more detail. To get an overview, use an IETF RFC 8340-based YANG tree diagram:
-
-{% code title="Example: The NSO Project YANG Model" %}
-```bash
-$ yanger -f tree tailf-ncs-project.yang
-module: tailf-ncs-project
- +--rw project-meta-data
- +--rw name string
- +--rw project-version? version
- +--rw description? string
- +--rw packages-store
- | +--rw directory* [name]
- | | +--rw name string
- | +--rw git* [repo]
- | +--rw repo string
- | +--rw (git-type)?
- | +--:(branch)
- | | +--rw branch? string
- | +--:(tag)
- | | +--rw tag? string
- | +--:(commit)
- | +--rw commit? string
- +--rw netsim
- | +--rw device* [name]
- | +--rw name -> /project-meta-data/package/name
- | +--rw prefix string
- | +--rw num-devices int32
- +--rw bundle!
- | +--rw name? string
- | +--rw includes
- | | +--rw file* [path]
- | | +--rw path string
- | +--rw package* [name]
- | +--rw name -> ../../../package/name
- | +--rw (package-location)?
- | +--:(local)
- | | +--rw local? empty
- | +--:(url)
- | | +--rw url? string
- | +--:(git)
- | +--rw git
- | +--rw repo? string
- | +--rw (git-type)?
- | +--:(branch)
- | | +--rw branch? string
- | +--:(tag)
- | | +--rw tag? string
- | +--:(commit)
- | +--rw commit? string
- +--rw package* [name]
- +--rw name string
- +--rw (package-location)?
- +--:(local)
- | +--rw local? empty
- +--:(url)
- | +--rw url? string
- +--:(git)
- +--rw git
- +--rw repo? string
- +--rw (git-type)?
- +--:(branch)
- | +--rw branch? string
- +--:(tag)
- | +--rw tag? string
- +--:(commit)
- +--rw commit? string
-```
-{% endcode %}
-
-{% code title="Example: Example Bundle project-meta-data.xml File" %}
-```xml
-
- l3vpn-demo
- 1.0
- l3vpn demo
-
-
- example_bundle
-
- my-package-1
-
-
-
-
- my-package-2
- http://localhost:9999/my-local.tar.gz
-
-
- my-package-3
-
- ssh://git@example.com/pkg/resource-manager.git
- 1.2
-
-
-
-
- my-package-1
-
-
-
- my-package-2
-
-
-
- my-package-3
-
- ssh://git@example.com/pkg/resource-manager.git
- 1.2
-
-
-
-```
-{% endcode %}
-
-Below is a list of the settings in `tailf-ncs-project.yang` that are configured through the metadata file. A detailed description can be found in the YANG model.
-
-{% hint style="info" %}
-The order of the XML entries in a `project-meta-data.xml` file must be the same as in the model.
-{% endhint %}
-
-* `name`: Unique name of the project.
-* `project-version`: The version of the project. This is for administrative purposes only.
-* `packages-store`:
- * `directory`: Paths for package dependencies.
- * `git`
- * `repo`: Default git package repositories.
- * `branch`, `tag`, or `commit` ID.
-* `netsim`: Lists the netsim devices used by the project, used to generate a proper `Makefile` for running the `ncs-project setup` script.
- * `device`
- * `prefix`
- * `num-devices`
-* `bundle`: Information to collect files and packages to pack them in a tarball bundle.
- * `name`: tarball filename.
- * `includes`: Files to include.
- * `package`: Packages to include (leafref to the package list below).
- * `name`: Name of the package.
-  * `local`, `url`, or `git`: Where to get the package. The Git option needs a `branch`, `tag`, or `commit` ID.
-* `package`: Packages used by the project.
- * `name`: Name of the package.
-  * `local`, `url`, or `git`: Where to get the package. The Git option needs a `branch`, `tag`, or `commit` ID.
diff --git a/development/advanced-development/developing-services/README.md b/development/advanced-development/developing-services/README.md
deleted file mode 100644
index 6b89669f..00000000
--- a/development/advanced-development/developing-services/README.md
+++ /dev/null
@@ -1,6 +0,0 @@
----
-description: Develop services and applications in NSO.
----
-
-# Developing Services
-
diff --git a/development/advanced-development/developing-services/service-development-using-java.md b/development/advanced-development/developing-services/service-development-using-java.md
deleted file mode 100644
index 196d5bf4..00000000
--- a/development/advanced-development/developing-services/service-development-using-java.md
+++ /dev/null
@@ -1,1100 +0,0 @@
----
-description: Learn service development in Java with Examples.
----
-
-# Service Development Using Java
-
-As using Java for service development may be somewhat more involved than Python, this section provides further examples and additional tips for setting up the development environment for Java.
-
-The two examples, a simple VLAN service and a Layer 3 MPLS VPN service, are more elaborate but show the same techniques as [Implementing Services](../../core-concepts/implementing-services.md).
-
-{% hint style="success" %}
-If you or your team primarily focuses on services implemented in Python, feel free to skip or only skim through this section.
-{% endhint %}
-
-## Creating a Simple VLAN Service
-
-In this example, you will create a simple VLAN service in Java. In order to illustrate the concepts, the device configuration is simplified from a networking perspective and uses only one single device type (Cisco IOS).
-
-### Overview of Steps
-
-We will first look at the following preparatory steps:
-
-1. Prepare a simulated environment of Cisco IOS devices: in this example, we start from scratch in order to illustrate the complete development process. We will not reuse any existing NSO examples.
-2. Generate a template service skeleton package: use NSO tools to generate a Java-based service skeleton package.
-3. Write and test the VLAN Service Model.
-4. Analyze the VLAN service mapping to IOS configuration.
-
-These steps are no different from defining services using templates. Next is to start playing with the Java Environment:
-
-1. Configuring the start and stop of the Java VM.
-2. First look at the Service Java Code: introduction to service mapping in Java.
-3. Developing by tailing log files.
-4. Developing using Eclipse.
-
-### Setting Up the Environment
-
-We will start by setting up a run-time environment that includes simulated Cisco IOS devices and configuration data for NSO. Make sure you have sourced the `ncsrc` file.
-
-1. Create a new directory that will contain the files for this example, such as:
-
-```bash
-$ mkdir ~/vlan-service
-$ cd ~/vlan-service
-```
-
-2. Now, let's create a simulated environment with 3 IOS devices and an NSO that is ready to run with this simulated network:
-
-```bash
-$ ncs-netsim create-network $NCS_DIR/packages/neds/cisco-ios 3 c
-$ ncs-setup --netsim-dir ./netsim/ --dest ./
-```
-
-3. Start the simulator and NSO:
-
-```bash
-$ ncs-netsim start
-DEVICE c0 OK STARTED
-DEVICE c1 OK STARTED
-DEVICE c2 OK STARTED
-$ ncs
-```
-
-4. Use the Cisco CLI towards one of the devices:
-
-```bash
-$ ncs-netsim cli-i c0
-admin connected from 127.0.0.1 using console on ncs
-c0> enable
-c0# configure
-Enter configuration commands, one per line. End with CNTL/Z.
-c0(config)# show full-configuration
-no service pad
-no ip domain-lookup
-no ip http server
-no ip http secure-server
-ip routing
-ip source-route
-ip vrf my-forward
-bgp next-hop Loopback 1
-!
-...
-```
-
-5. Use the NSO CLI to get the configuration:
-
-```bash
-$ ncs_cli -C -u admin
-
-admin connected from 127.0.0.1 using console on ncs
-admin@ncs# devices sync-from
-sync-result {
- device c0
- result true
-}
-sync-result {
- device c1
- result true
-}
-sync-result {
- device c2
- result true
-}
-admin@ncs# config
-Entering configuration mode terminal
-
-admin@ncs(config)# show full-configuration devices device c0 config
-devices device c0
- config
- no ios:service pad
- ios:ip vrf my-forward
- bgp next-hop Loopback 1
- !
- ios:ip community-list 1 permit
- ios:ip community-list 2 deny
- ios:ip community-list standard s permit
- no ios:ip domain-lookup
- no ios:ip http server
- no ios:ip http secure-server
- ios:ip routing
-...
-```
-
-6. Finally, set VLAN information manually on a device to prepare for the mapping later.
-
-```cli
-admin@ncs(config)# devices device c0 config ios:vlan 1234
-admin@ncs(config)# devices device c0 config ios:interface
- FastEthernet 1/0 switchport mode trunk
-admin@ncs(config-if)# switchport trunk allowed vlan 1234
-admin@ncs(config-if)# top
-
-admin@ncs(config)# show configuration
-devices device c0
- config
- ios:vlan 1234
- !
- ios:interface FastEthernet1/0
- switchport mode trunk
- switchport trunk allowed vlan 1234
- exit
- !
-!
-
-admin@ncs(config)# commit
-```
-
-### Creating a Service Package
-
-1. In the run-time directory you created:
-
-```bash
-$ ls -F1
-README.ncs
-README.netsim
-logs/
-ncs-cdb/
-ncs.conf
-netsim/
-packages/
-scripts/
-state/
-```
-
-Note the `packages` directory, `cd` to it:
-
-```bash
-$ cd packages
-$ ls -l
-total 8
-cisco-ios -> .../packages/neds/cisco-ios
-```
-
-Currently, there is only one package, the Cisco IOS NED.
-
-2. We will now create a new package that will contain the VLAN service.
-
-```bash
-$ ncs-make-package --service-skeleton java vlan
-$ ls
-cisco-ios vlan
-```
-
-This creates a package with the following structure:
-
-
-_Figure: Package Structure_
-
-During the rest of this section, we will work with the `vlan/src/yang/vlan.yang` and `vlan/src/java/src/com/example/vlan/vlanRFS.java` files.
-
-### The Service Model
-
-So, if a user wants to create a new VLAN in the network, what should the parameters be? Edit the `vlan/src/yang/vlan.yang` as shown below:
-
-```yang
- augment /ncs:services {
- list vlan {
- key name;
-
- uses ncs:service-data;
- ncs:servicepoint "vlan-servicepoint";
- leaf name {
- type string;
- }
-
- leaf vlan-id {
- type uint32 {
- range "1..4096";
- }
- }
-
- list device-if {
- key "device-name";
- leaf device-name {
- type leafref {
- path "/ncs:devices/ncs:device/ncs:name";
- }
- }
- leaf interface {
- type string;
- }
- }
- }
- }
-```
-
-This simple VLAN service model says:
-
-1. We give a VLAN a name, for example `net-1`.
-2. The VLAN has an id from 1 to 4096.
-3. The VLAN is attached to a list of devices and interfaces. In order to make this example as simple as possible, the interface name is just a string. A more correct and useful example would specify this as a reference to an interface on the device, but for now, it is better to keep the example simple.
-
-The VLAN service list is augmented into the services tree in NSO. This specifies the path to reach VLANs in the CLI, REST, etc. There are no requirements on where the service shall be added into NCS; if you want VLANs to be at the top level, simply remove the `augment` statement.
-
-Make sure you keep the lines generated by the `ncs-make-package`:
-
-```
-uses ncs:service-data;
-ncs:servicepoint "vlan-servicepoint";
-```
-
-The two lines tell NSO that this is a service. The first line expands to a YANG structure that is shared amongst all services. The second line connects the service to the Java callback.
-
-To build this service model, `cd` to `packages/vlan/src` and type `make` (assumes that you have the prerequisite `make` build system installed).
-
-```bash
-$ cd packages/vlan/src/
-$ make
-```
-
-We can now test the service model by requesting NSO to reload all packages:
-
-```bash
-$ ncs_cli -C -u admin
-admin@ncs# packages reload
->>> System upgrade is starting.
->>> Sessions in configure mode must exit to operational mode.
->>> No configuration changes can be performed until upgrade has completed.
->>> System upgrade has completed successfully.
-result Done
-```
-
-You can also stop and start NSO, but then you have to pass the option `--with-package-reload` when starting NSO. This is important: NSO does not by default take any changes in packages into account when restarting. When packages are reloaded, the `state/packages-in-use` directory is updated.
-
-Now, create a VLAN service (nothing will happen yet, since we have not defined any mapping).
-
-```bash
-admin@ncs(config)# services vlan net-0 vlan-id 1234 device-if c0 interface 1/0
-admin@ncs(config-device-if-c0)# top
-admin@ncs(config)# commit
-```
-
-Now, let us move on and connect that to some device configuration using Java mapping. Note well that Java mapping is not needed; templates are more straightforward and recommended, but we use this as a "Hello World" introduction to Java service programming in NSO. At the end, we will also show how to combine Java and templates: templates are used to define a vendor-independent way of mapping service attributes to device configuration, and Java is used as a thin layer before the templates to do logic, call-outs to external systems, etc.
-
-### Managing the NSO Java VM
-
-The default configuration of the Java VM is:
-
-```cli
-admin@ncs(config)# show full-configuration java-vm | details
-java-vm stdout-capture enabled
-java-vm stdout-capture file ./logs/ncs-java-vm.log
-java-vm connect-time 60
-java-vm initialization-time 60
-java-vm synchronization-timeout-action log-stop
-```
-
-By default, NCS will start the Java VM by invoking the command `$NCS_DIR/bin/ncs-start-java-vm`. That script will invoke:
-
-```bash
-$ java com.tailf.ncs.NcsJVMLauncher
-```
-
-The class `NcsJVMLauncher` contains the `main()` method. The started Java VM will automatically retrieve and deploy all Java code for the packages defined in the load path of the `ncs.conf` file. No other specification than the `package-meta-data.xml` for each package is needed.
-
-The verbosity of Java error messages can be controlled by:
-
-```bash
-admin@ncs(config)# java-vm exception-error-message verbosity
-Possible completions:
- standard trace verbose
-```
-
-For more details on the Java VM settings, see [NSO Java VM](../../core-concepts/nso-virtual-machines/nso-java-vm.md).
-
-### A First Look at Java Development
-
-The service model and the corresponding Java callback are bound by the servicepoint name. Look at the service model in `packages/vlan/src/yang`:
-
-
-
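-A minimal sketch of the generated skeleton with a "Hello World!" printout added (the generated `vlanRFS.java` may differ in details):
-
-```java
-package com.example.vlan;
-
-import java.util.Properties;
-import com.tailf.dp.DpCallbackException;
-import com.tailf.dp.annotations.ServiceCallback;
-import com.tailf.dp.proto.ServiceCBType;
-import com.tailf.dp.services.ServiceContext;
-import com.tailf.navu.NavuNode;
-
-public class vlanRFS {
-
-    // Bound to the model through the servicepoint name from vlan.yang.
-    @ServiceCallback(servicePoint = "vlan-servicepoint",
-                     callType = ServiceCBType.CREATE)
-    public Properties create(ServiceContext context,
-                             NavuNode service,
-                             NavuNode ncsRoot,
-                             Properties opaque)
-            throws DpCallbackException {
-        System.out.println("Hello World!");
-        return opaque;
-    }
-}
-```
-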
-Modify the generated code to include the print "Hello World!" statement as in the sketch above. Re-build the package:
-
-```bash
-$ cd packages/vlan/src/
-$ make
-```
-
-Whenever a package has changed, we need to tell NSO to reload the package. There are three ways:
-
-1. Just reload the implementation of a specific package, without loading any model changes: `admin@ncs# packages package vlan redeploy`.
-2. Reload all packages including any model changes: `admin@ncs# packages reload`.
-3. Restart NSO with the reload option: `$ ncs --with-package-reload`.
-
-When that is done we can create a service (or modify an existing one) and the callback will be triggered:
-
-```cli
-admin@ncs(config)# vlan net-0 vlan-id 888
-admin@ncs(config-vlan-net-0)# commit
-```
-
-Now, have a look at the `logs/ncs-java-vm.log`:
-
-```bash
-$ tail ncs-java-vm.log
-...
- 03-Mar-2014::16:55:23.705 NcsMain JVM-Launcher: \
- - REDEPLOY PACKAGE COLLECTION --> OK
- 03-Mar-2014::16:55:23.705 NcsMain JVM-Launcher: \
- - REDEPLOY ["vlan"] --> DONE
- 03-Mar-2014::16:55:23.706 NcsMain JVM-Launcher: \
- - DONE COMMAND --> REDEPLOY_PACKAGE
- 03-Mar-2014::16:55:23.706 NcsMain JVM-Launcher: \
- - READ SOCKET =>
-Hello World!
-```
-
-Tailing the `ncs-java-vm.log` is one way of developing. You can also start and stop the Java VM explicitly and see the trace in the shell. To do this, tell NSO not to start the VM by adding the following snippet to `ncs.conf`:
-
-```xml
-<java-vm>
-  <auto-start>false</auto-start>
-</java-vm>
-```
-
-Then, after restarting NSO or reloading the configuration, from the shell prompt:
-
-```bash
-$ ncs-start-java-vm
-.....
-.. all stdout from JVM
-```
-
-So modifying or creating a VLAN service will now have the "Hello World!" string show up in the shell. You can modify the package, then reload/redeploy, and see the output.
-
-### Using Eclipse
-
-To use a GUI-based IDE Eclipse, first generate an environment for Eclipse:
-
-```bash
-$ ncs-setup --eclipse-setup
-```
-
-This will generate two files, `.classpath` and `.project`. Add this directory to Eclipse as a **File** -> **New** -> **Java Project**, uncheck **Use default location**, and enter the directory where the `.classpath` and `.project` have been generated.
-
-We are immediately ready to run this code in Eclipse.
-
-
-_Figure: Creating the Project in Eclipse_
-
-All we need to do is choose the `main()` routine in the `NcsJVMLauncher` class. The Eclipse debugger works now as usual, and we can, at will, start and stop the Java code.
-
-{% hint style="warning" %}
-**Timeouts**
-
-A caveat worth mentioning here is that there are a few timeouts between NSO and the Java code that will trigger when we are in the debugger. While developing with the Eclipse debugger and breakpoints, we typically want to disable these timeouts.
-
-First, we have the three timeouts in `ncs.conf` that matter. Set the three values of `/ncs-config/api/new-session-timeout`, `/ncs-config/api/query-timeout`, and `/ncs-config/api/connect-timeout` to a large value (see man page [ncs.conf(5)](../../../resources/man/ncs.conf.5.md) for a detailed description on what those values are). If these timeouts are triggered, NSO will close all sockets to the Java VM.
-
-```bash
-$ cp $NCS_DIR/etc/ncs/ncs.conf .
-```
-{% endhint %}
-
-Edit the file and enter the following XML entry just after the Webui entry:
-
-```xml
-<api>
-  <new-session-timeout>PT1000S</new-session-timeout>
-  <query-timeout>PT1000S</query-timeout>
-  <connect-timeout>PT1000S</connect-timeout>
-</api>
-```
-
-Now, restart `ncs`, and from now on start it as:
-
-```bash
-$ ncs -c ./ncs.conf
-```
-
-You can verify that the Java VM is not running by checking the package status:
-
-```bash
-admin@ncs# show packages package vlan
-packages package vlan
- package-version 1.0
- description "Skeleton for a resource facing service - RFS"
- ncs-min-version 3.0
- directory ./state/packages-in-use/1/vlan
- component RFSSkeleton
- callback java-class-name [ com.example.vlan.vlanRFS ]
- oper-status java-uninitialized
-```
-
-Create a new project and start the launcher `main` in Eclipse:
-
-
-_Figure: Starting the NSO JVM from Eclipse_
-
-You can start and stop the Java VM from Eclipse. Note well that this is not needed since the change cycle is: modify the Java code, `make` in the `src` directory, and then reload the package. All while NSO and the JVM are running.
-
-Change the VLAN service and see the console output in Eclipse:
-
-
-_Figure: Console Output in Eclipse_
-
-Another option is to have Eclipse connect to the running VM. Start the VM manually with the `-d` option.
-
-```bash
-$ ncs-start-java-vm -d
-Listening for transport dt_socket at address: 9000
-NCS JVM STARTING
-...
-```
-
-Then you can set up Eclipse to connect to the NSO Java VM:
-
-
-_Figure: Connecting to NSO Java VM Remote with Eclipse_
-
-In order for Eclipse to show the NSO code when debugging, add the NSO Source Jars (add external Jar in Eclipse):
-
-
-_Figure: Adding the NSO Source Jars_
-
-Navigate to the service `create` for the VLAN service and add a breakpoint:
-
-
-_Figure: Setting a break-point in Eclipse_
-
-Commit a change of a VLAN service instance and Eclipse will stop at the breakpoint:
-
-
-_Figure: Service Create breakpoint_
-
-### Writing the Service Code
-
-#### **Fetching the Service Attributes**
-
-So the problem at hand is that we have service parameters and a resulting device configuration. Previously, we showed how to do that with templates. The same principles apply in Java. The service model and the device models are YANG models in NSO irrespective of the underlying protocol. The Java mapping code transforms the service attributes to the corresponding configuration leafs in the device model.
-
-The NAVU API lets the Java programmer navigate the service model and the device models as a DOM tree. Have a look at the `create` signature:
-
-```java
- @ServiceCallback(servicePoint="vlan-servicepoint",
- callType=ServiceCBType.CREATE)
- public Properties create(ServiceContext context,
- NavuNode service,
- NavuNode ncsRoot,
- Properties opaque)
- throws DpCallbackException {
-```
-
-Two NAVU nodes are passed: the actual service instance, `service`, and the NSO root, `ncsRoot`.
-
-We can have a first look at NAVU by analyzing the first `try` statement:
-
-```java
-try {
- // check if it is reasonable to assume that devices
- // initially has been sync-from:ed
- NavuList managedDevices =
- ncsRoot.container("devices").list("device");
- for (NavuContainer device : managedDevices) {
- if (device.list("capability").isEmpty()) {
- String mess = "Device %1$s has no known capabilities, " +
- "has sync-from been performed?";
- String key = device.getKey().elementAt(0).toString();
- throw new DpCallbackException(String.format(mess, key));
- }
- }
-```
-
-NAVU is a lazily evaluated DOM tree that represents the instantiated YANG model. So, knowing the NSO model (`devices/device`, a `container/list` structure), the device list can be retrieved by `ncsRoot.container("devices").list("device")`; each device entry in turn holds the list of capabilities checked above.
-
-The `service` node can be used to fetch the values of the VLAN service instance:
-
-* `vlan/name`
-* `vlan/vlan-id`
-* `vlan/device-if/device-name` and `vlan/device-if/interface`
-
-The first snippet that iterates the service model and prints to the console looks like below:
-
-
-_Figure: The first Example_
-
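-An assumed reconstruction of that first snippet, placed inside the `create()` method (the original figure is not reproduced here):
-
-```java
-// Print the service instance attributes to the console.
-ConfUInt32 vlanId = (ConfUInt32) service.leaf("vlan-id").value();
-System.out.println("VLAN ID: " + vlanId);
-for (NavuContainer deviceIf : service.list("device-if").elements()) {
-    System.out.println("  device: "
-            + deviceIf.leaf("device-name").valueAsString()
-            + " interface: "
-            + deviceIf.leaf("interface").valueAsString());
-}
-```
-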
-The `com.tailf.conf` package contains Java Classes representing the YANG types like `ConfUInt32`.
-
-Try it out in the following sequence:
-
-1. **Rebuild the Java Code**: In `packages/vlan/src` type `make`.
-2. **Reload the Package**: In the NSO Cisco CLI, do `admin@ncs# packages package vlan redeploy`.
-3. **Create or Modify a `vlan` Service**: In NSO CLI, do `admin@ncs(config)# services vlan net-0 vlan-id 844 device-if c0 interface 1/0`, and commit.
-
-#### **Mapping Service Attributes to Device Configuration**
-
-
-_Figure: Fetching Values from the Service Instance_
-
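-An assumed reconstruction of the first lines of that code (the original figure is not reproduced here):
-
-```java
-// Two ways to reach the vlan-id leaf: by name, or by generated symbol.
-ConfUInt32 vlanId = (ConfUInt32) service.leaf("vlan-id").value();
-ConfUInt32 vlanIdSym = (ConfUInt32) service.leaf(vlan._vlan_id_).value();
-// Cast to a 16-bit unsigned value for later use (constructor assumed).
-ConfUInt16 vlanID16 = new ConfUInt16((int) vlanId.longValue());
-```
-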
-Remember the `service` attribute is passed as a parameter to the create method. As a starting point, look at the first three lines:
-
-1. To reach a specific leaf in the model use the NAVU leaf method with the name of the leaf as a parameter. This leaf then has various methods like getting the value as a string.
-2. `service.leaf("vlan-id")` and `service.leaf(vlan._vlan_id_)` are two ways of referring to the VLAN ID leaf of the service. The latter alternative uses symbols generated by the compilation steps; if it is used, you get the benefit of compile-time checking. From this leaf you can get the value according to the type in the YANG model, `ConfUInt32` in this case.
-3. Line 3 shows an example of casting between types. In this case, we prepare the VLAN ID as a 16-bit unsigned integer for later use.
-
-The next step is to iterate over the devices and interfaces. The NAVU `elements()` method returns the elements of a NAVU list.
-
-
-_Figure: Iterating a List in the Service Model_
-
-In order to write the mapping code, make sure you have an understanding of the device model. One good way of doing that is to create a corresponding configuration on one device and then display it with the pipe target `display xpath`. Below is a CLI output that shows the model paths for `FastEthernet 1/0`:
-
-```cli
-admin@ncs% show devices device c0 config ios:interface
- FastEthernet 1/0 | display xpath
-
-/devices/device[name='c0']/config/ios:interface/
- FastEthernet[name='1/0']/switchport/mode/trunk
-
-/devices/device[name='c0']/config/ios:interface/
- FastEthernet[name='1/0']/switchport/trunk/allowed/vlan/vlans [ 111 ]
-```
-
-Another useful tool is to render a tree view of the model:
-
-```bash
-$ pyang -f jstree tailf-ned-cisco-ios.yang -o ios.html
-```
-
-This can then be opened in a Web browser and model paths are shown to the right:
-
-
-_Figure: The Cisco IOS Model_
-
-Now, we replace the print statements with setting real configuration on the devices.
-
-
-_Figure: Setting the VLAN List_
-
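-An assumed reconstruction of the mapping code from the figure (container and list names follow the CLI paths shown earlier):
-
-```java
-for (NavuContainer deviceIf : service.list("device-if").elements()) {
-    // Follow the device-name leafref to the corresponding /devices/device
-    // entry: deref() returns the key leaf, getParent() its enclosing node.
-    NavuContainer deviceContainer =
-        (NavuContainer) deviceIf.leaf("device-name").deref().get(0).getParent();
-
-    // Set the VLAN list entry on the device; sharedCreate() maintains
-    // FASTMAP reference counters for configuration shared between services.
-    deviceContainer.container("config")
-                   .container("vlan")
-                   .list("vlan-list")
-                   .sharedCreate(vlanID16.toString());
-
-    // Use the interface name as a key to check that the interface exists.
-    String feIntfName = deviceIf.leaf("interface").valueAsString();
-    NavuList feIntfList = deviceContainer.container("config")
-                                         .container("interface")
-                                         .list("FastEthernet");
-    if (feIntfList.containsNode(feIntfName)) {
-        // ... continue with the switchport settings shown below
-    }
-}
-```
-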
-Let us walk through the above code line by line. The `device-name` leaf is a `leafref`. The `deref` method returns the object that the `leafref` refers to. The `getParent()` call might surprise the reader. Look at the path for the leafref: `/device/name/config/ios:interface/name`. The `name` leafref is the key that identifies a specific interface. The `deref` returns that key, while we want a reference to the interface (`/device/name/config/ios:interface`); that is the reason for the `getParent()`.
-
-The next line sets the VLAN list on the device. Note well that this follows the paths displayed earlier using the NSO CLI. The `sharedCreate()` is important: it creates device configuration based on this service and says that other services might also create the same value ("shared"). Shared create maintains reference counters for the created configuration so that service deletion removes the configuration only when the last service is deleted. Finally, the interface name is used as a key to check whether the interface exists, via `containsNode()`.
-
-The last step is to update the VLAN list for each interface. The code below adds an element to the VLAN `leaf-list`.
-
-```java
-// The interface
-NavuNode theIf = feIntfList.elem(feIntfName);
-theIf.container("switchport").
- sharedCreate().
- container("mode").
- container("trunk").
- sharedCreate();
-// Create the VLAN leaf-list element
-theIf.container("switchport").
- container("trunk").
- container("allowed").
- container("vlan").
- leafList("vlans").
- sharedCreate(vlanID16);
-```
-
-Note that the code uses the `sharedCreate()` functions instead of `create()`, as the shared variants are preferred and a best practice.
-
-The above `create` method is all that is needed for create, read, update, and delete. NSO will automatically handle any changes, like changing the VLAN ID, adding an interface to the VLAN service, or deleting the service. This is handled by the FASTMAP engine, which renders any change based on the single definition of the `create` method.
-
-## Simple VLAN Service with Templates
-
-### Overview
-
-The mapping strategy using only Java is illustrated in the following figure.
-
-
-_Figure: Flat Mapping with Java_
-
-This strategy has some drawbacks:
-
-* Managing different device vendors. If we were to introduce more vendors into the network, this would need to be handled by the Java code. Of course, this can be factored into separate classes in order to keep the general logic clean and just pass the device details to specific vendor classes, but this gets complex and will always require Java programmers to introduce new device types.
-* No clear separation of concerns and domain expertise. The general business logic for a service is one thing; detailed configuration knowledge of device types is something else. The latter requires network engineers, while the former is normally handled by a separate team that deals with OSS integration.
-
-Java and templates can be combined:
-
-
-_Figure: Two Layered Mapping using Feature Templates_
-
-In this model, the Java layer focuses on required logic, but it never touches concrete device models from various vendors. The vendor-specific details are abstracted away using feature templates. The templates take variables as input from the service logic, and the templates in turn transform these into concrete device configuration. The introduction of a new device type does not affect the Java mapping.
-
-This approach has several benefits:
-
-* The service logic can be developed independently of device types.
-* New device types can be introduced at runtime without affecting service logic.
-* Separation of concerns: network engineers are comfortable with templates; they look like configuration snippets, and these engineers have expertise in how configuration is applied to real devices. People defining the service logic are often more like programmers; they need to interface with other systems, etc., which suits a Java layer.
-
-Note that the logic layer does not understand the device types; the templates will dynamically apply the correct leg of the template depending on which device is touched.
-
-### The VLAN Feature Template
-
-From an abstraction point of view, we want a template that takes the following variables:
-
-* VLAN ID
-* Device and interface
-
-So the mapping logic can just pass these variables to the feature template and it will apply it to a multi-vendor network.
-
-Create a template as described before.
-
-* Create a concrete configuration on a device, or several devices of different type
-* Request NSO to display that as XML
-* Replace values with variables
-
-This results in a feature template like below:
-
-```xml
-<config-template xmlns="http://tail-f.com/ns/config/1.0">
-  <devices xmlns="http://tail-f.com/ns/ncs">
-    <device>
-      <name>{$DEVICE}</name>
-      <config>
-        <vlan xmlns="urn:ios">
-          <vlan-list>
-            <id>{$VLAN_ID}</id>
-          </vlan-list>
-        </vlan>
-        <interface xmlns="urn:ios">
-          <FastEthernet>
-            <name>{$INTF_NAME}</name>
-            <switchport>
-              <mode>
-                <trunk/>
-              </mode>
-              <trunk>
-                <allowed>
-                  <vlan>
-                    <vlans>{$VLAN_ID}</vlans>
-                  </vlan>
-                </allowed>
-              </trunk>
-            </switchport>
-          </FastEthernet>
-        </interface>
-      </config>
-    </device>
-  </devices>
-</config-template>
-```
-
-This template only maps to Cisco IOS devices (the `xmlns="urn:ios"` namespace), but you can add "legs" for other device types at any point in time and reload the package.
-
-{% hint style="info" %}
-Nodes set with a template variable evaluating to the empty string are ignored, e.g., a node set to `{$VAR}` is ignored if the template variable `$VAR` evaluates to the empty string. However, this does not apply to XPath expressions evaluating to the empty string. A template variable can be surrounded by the XPath function `string()` if it is desirable to set a node to the empty string.
-{% endhint %}
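-
-For example, to set a node even when the variable may be empty, a template line could read (the node name here is hypothetical):
-
-```xml
-<description>{string($DESC)}</description>
-```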
-
-### The VLAN Java Logic
-
-The Java mapping logic for applying the template is shown below:
-
-
-*Figure: Mapping Logic using a Template*
-
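-In outline, such a create method could look like the following sketch (the service point name, the service leaf names, and the template name `vlan-template` are assumptions here):
-
-```java
-@ServiceCallback(servicePoint = "vlan-servicepoint",
-                 callType = ServiceCBType.CREATE)
-public Properties create(ServiceContext context,
-                         NavuNode service,
-                         NavuNode ncsRoot,
-                         Properties opaque)
-        throws DpCallbackException {
-    try {
-        // Load the feature template from the package templates/ directory
-        Template vlanTemplate = new Template(context, "vlan-template");
-        // Pass the service inputs to the template; no device-type logic here
-        TemplateVariables vars = new TemplateVariables();
-        vars.putQuoted("DEVICE", service.leaf("device").valueAsString());
-        vars.putQuoted("VLAN_ID", service.leaf("vlan-id").valueAsString());
-        vars.putQuoted("INTF_NAME", service.leaf("interface").valueAsString());
-        // Apply under the service node so that FASTMAP tracks the changes
-        vlanTemplate.apply(service, vars);
-    } catch (Exception e) {
-        throw new DpCallbackException("Cannot create vlan service", e);
-    }
-    return opaque;
-}
-```
-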
-Note that the Java code knows nothing about the underlying device type; it just passes the feature variables to the template. At run-time, you can update the template with mappings to other device types. The Java code stays untouched; if you modify an existing VLAN service instance to refer to the new device type, the `commit` will generate the corresponding configuration for that device.
-
-An attentive reader will ask: why have the Java layer at all, when this could have been done as a pure template solution? That is true, but this simple Java layer leaves room for arbitrarily complex service logic before applying the template.
-
-### Steps to Build a Java and Template Solution
-
-The steps to build the solution described in this section are:
-
-1. Create a run-time directory: `$ mkdir ~/service-template; cd ~/service-template`.
-2. Generate a netsim environment: `$ ncs-netsim create-network $NCS_DIR/packages/neds/cisco-ios 3 c`.
-3. Generate the NSO runtime environment: `$ ncs-setup --netsim-dir ./netsim --dest ./`.
-4. Create the VLAN package in the packages directory: `$ cd packages; ncs-make-package --service-skeleton java vlan`.
-5. Create a template directory in the VLAN package: `$ cd vlan; mkdir templates`.
-6. Save the above-described template in `packages/vlan/templates`.
-7. Create the YANG service model according to the above: `packages/vlan/src/yang/vlan.yang`.
-8. Update the Java code according to the above: `packages/vlan/src/java/src/com/example/vlan/vlanRFS.java`.
-9. Build the package: in `packages/vlan/src` do `make`.
-10. Start NSO.
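-
-Collected into a single shell session, the steps could look like this (the template file name is an assumption):
-
-```bash
-$ mkdir ~/service-template; cd ~/service-template
-$ ncs-netsim create-network $NCS_DIR/packages/neds/cisco-ios 3 c
-$ ncs-setup --netsim-dir ./netsim --dest ./
-$ cd packages
-$ ncs-make-package --service-skeleton java vlan
-$ cd vlan; mkdir templates
-$ # save the feature template as templates/vlan-template.xml
-$ # edit src/yang/vlan.yang and src/java/src/com/example/vlan/vlanRFS.java
-$ cd src && make
-$ cd ../../.. && ncs
-```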
-
-## Layer 3 MPLS VPN Service
-
-This service shows a more elaborate service mapping. It is based on the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-java) example.
-
-MPLS VPNs are a type of Virtual Private Network (VPN) that achieves segmentation of network traffic using Multiprotocol Label Switching (MPLS), often found in Service Provider (SP) networks. The Layer 3 variant uses BGP to connect and distribute routes between sites of the VPN.
-
-The figure below illustrates an example configuration for one leg of the VPN. Configuration items in bold are variables that are generated from the service inputs.
-
-
-*Figure: Example L3 VPN Device Configuration*
-
-### Auxiliary Service Data
-
-Sometimes the input parameters are enough to generate the corresponding device configurations. In many cases, however, they are not: the service mapping logic may need to reach out to other data in order to generate the device configuration. This is common in the following scenarios:
-
-* **Policies**: It might make sense to define policies that can be shared between service instances. Policies, for example QoS, have data models of their own (not service models), and the mapping code reads from them.
-* **Topology Information**: The service mapping might need to know about connected devices, such as which PE the CE is connected to.
-* **Resources like VLAN IDs and IP addresses**: These might not be given as input parameters. They can be modeled separately in NSO or fetched from an external system.
-
-It is important to consider the above examples when designing the service model: what is input, and what is available from other sources? This example illustrates how to define QoS policies "on the side". A reference to an existing QoS policy is passed as input. This is a much better principle than giving all QoS parameters to every service instance. Note well that if you modify the QoS definitions that services refer to, this will not change the existing services. To have a service read the changed policies, you need to perform a **re-deploy** on the service.
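-
-For example, after changing a shared QoS policy, re-deploying an affected instance (using the `l3vpn` service shown later in this section) makes it pick up the new definition:
-
-```cli
-admin@ncs# vpn l3vpn volvo re-deploy
-```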
-
-This example also uses a list that maps every CE to a PE. This list needs to be populated before any service is created. The service model only has the CE as an input parameter, and the service mapping code performs a lookup in this list to get the PE. If the underlying topology changes, a service re-deploy will adapt the service to the changed CE-PE links. See more on topology below.
-
-NSO has a package to manage resources like VLAN IDs and IP addresses as pools within NSO. In this way, the resources are managed within the transaction. The mapping code could also reach out externally to get resources; nano services are recommended for this.
-
-### Topology
-
-Using topology information in the instantiation of an NSO service is a common approach, but also an area with many misconceptions. Just as a service in NSO takes a black-box view of the configuration needed for that service in the network, NSO treats topologies the same way. It is of course common to reference topology information in the service, but it is highly desirable to have a decoupled and self-sufficient service that uses only the part of the topology that is relevant to that specific service.
-
-Other parts of the topology could either be handled by other services or left for the network state to sort out; they do not necessarily relate to the configuration of the network. A routing protocol, for example, will handle the IP path through the network.
-
-It is highly desirable not to introduce unneeded dependencies on network topology in your service.
-
-To illustrate this, let's look at a Layer 3 MPLS VPN service. A logical overview of an MPLS VPN with three endpoints could look something like this: CE routers connect to PE routers, which are connected to an MPLS core network. In the MPLS core network, there are a number of P routers.
-
-
-*Figure: Simple MPLS VPN Topology*
-
-In the service model, you only want to configure the CE devices to use as endpoints. In this case, topology information could be used to sort out which PE router each CE router is connected to. However, what type of topology do you need? Let's look at a more detailed picture of what the L1 and L2 topology could look like for one side of the picture above.
-
-
-*Figure: L1-L2 Topology*
-
-In pretty much all networks, there is an access network between the CE and PE routers. In the picture above, the CE routers are connected to local Ethernet switches in a local Ethernet access network, connected through optical equipment. The local Ethernet access network is in turn connected to a regional Ethernet access network, which connects to the PE router. The physical connections between the devices in this picture have most likely been simplified; in the real world, redundant cabling would be used. The picture above is of course only one example of what an access network could look like, and it is very likely that a service provider uses several access technologies, for example Ethernet, ATM, or DSL.
-
-Depending on how you design the L3VPN service, the physical cabling or the exact traffic path taken in the Layer 2 Ethernet access network might not be that interesting, just as we make no assumptions about how traffic is transported over the MPLS core network. In both cases, we trust the underlying protocols to handle state in the network: spanning tree in the Ethernet access network and routing protocols like BGP in the MPLS cloud. Instead, it could make more sense to have a separate NSO service for the access network, both so that it can be reused, for example by L3VPNs as well as L2VPNs, and to avoid tightly coupling the L3VPN service to an access network that can differ (Ethernet, ATM, etc.).
-
-Looking at the topology again from the L3VPN service perspective, if the service assumes that the access network is already provisioned or taken care of by another service, it could look like this:
-
-
-*Figure: Black-box Topology*
-
-The information needed to sort out which PE router a CE router is connected to, as well as to configure both CE and PE routers, is:
-
-* The interface on the CE router that is connected to the PE router, and the IP address of that interface.
-* The interface on the PE router that is connected to the CE router, and the IP address of that interface.
-
-### Creating a Multi-Vendor Service
-
-This section describes the creation of an MPLS L3VPN service in a multi-vendor environment by applying the concepts described above. The example discussed can be found in [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-java). The example network consists of Cisco ASR 9k and Juniper core routers (P and PE) and Cisco IOS-based CE routers.
-
-The goal of the NSO service is to set up an MPLS Layer 3 VPN on a number of CE router endpoints using BGP as the CE-PE routing protocol. Connectivity between the CE and PE routers is done through a Layer 2 Ethernet access network, which is out of scope for this service. In a real-world scenario, the access network could, for example, be handled by another service.
-
-In the example network, we can also assume that the MPLS core network already exists and is configured.
-
-
-*Figure: The MPLS VPN Example*
-
-#### **YANG Service Model Design**
-
-When designing service YANG models there are a number of things to take into consideration. The process usually involves the following steps:
-
-1. Identify the resulting device configurations for a deployed service instance.
-2. Identify what parameters from the device configurations are common and should be put in the service model.
-3. Ensure that the scope of the service and the structure of the model work with the NSO architecture and service mapping concepts. For example, avoid unnecessary complexities in the code to work with the service parameters.
-4. Ensure that the model is structured in a way so that integration with other systems north of NSO works well. For example, ensure that the parameters in the service model map to the needed parameters from an ordering system.
-
-Steps 1 and 2: Device Configurations and Identifying Parameters:
-
-Deploying an MPLS VPN in the network results in the following basic CE and PE configurations. The snippets below only include the Cisco IOS and Cisco IOS-XR configurations. In a real process, all applicable device vendor configurations should be analyzed.
-
-{% code title="CE Router Config" %}
-```
- interface GigabitEthernet0/1.77
- description Link to PE / pe0 - GigabitEthernet0/0/0/3
- encapsulation dot1Q 77
- ip address 192.168.1.5 255.255.255.252
- service-policy output volvo
- !
- policy-map volvo
- class class-default
- shape average 6000000
- !
- !
- interface GigabitEthernet0/11
- description volvo local network
- ip address 10.7.7.1 255.255.255.0
- exit
- router bgp 65101
- neighbor 192.168.1.6 remote-as 100
- neighbor 192.168.1.6 activate
- network 10.7.7.0
- !
-```
-{% endcode %}
-
-{% code title="PE Router Config" %}
-```
- vrf volvo
- address-family ipv4 unicast
- import route-target
- 65101:1
- exit
- export route-target
- 65101:1
- exit
- exit
- exit
- policy-map volvo-ce1
- class class-default
- shape average 6000000 bps
- !
- end-policy-map
- !
- interface GigabitEthernet 0/0/0/3.77
- description Link to CE / ce1 - GigabitEthernet0/1
- ipv4 address 192.168.1.6 255.255.255.252
- service-policy output volvo-ce1
- vrf volvo
- encapsulation dot1q 77
- exit
- router bgp 100
- vrf volvo
- rd 65101:1
- address-family ipv4 unicast
- exit
- neighbor 192.168.1.5
- remote-as 65101
- address-family ipv4 unicast
- as-override
- exit
- exit
- exit
- exit
-```
-{% endcode %}
-
-The device configuration parameters that need to be uniquely configured for each VPN are the interfaces and VLAN ID, the link and LAN IP addresses, the VRF name, the route targets, the AS numbers, and the shaping bandwidth in the snippets above.
-
-Steps 3 and 4: Model Structure and Integration with other Systems:
-
-When configuring a new MPLS L3VPN in the network, we have to configure all CE routers that should be interconnected by the VPN, as well as the PE routers they connect to.
-
-However, when creating a new L3VPN service instance in NSO, it would be ideal if only the endpoints (CE routers) were needed as parameters, so that a northbound order management system does not need any knowledge of PE routers. This means a way to use topology information is needed to derive or compute which PE router a CE router is connected to. This keeps the input parameters for a new service instance very simple. It also makes the entire service very flexible, since we can move CE and PE routers around without modifying the service configuration.
-
-Resulting YANG Service Model:
-
-```yang
-container vpn {
-
- list l3vpn {
- tailf:info "Layer3 VPN";
-
- uses ncs:service-data;
- ncs:servicepoint l3vpn-servicepoint;
-
- key name;
- leaf name {
- tailf:info "Unique service id";
- type string;
- }
- leaf as-number {
- tailf:info "MPLS VPN AS number.";
- mandatory true;
- type uint32;
- }
-
- list endpoint {
- key id;
- leaf id {
- tailf:info "Endpoint identifier";
- type string;
- }
- leaf ce-device {
- mandatory true;
- type leafref {
- path "/ncs:devices/ncs:device/ncs:name";
- }
- }
- leaf ce-interface {
- mandatory true;
- type string;
- }
- leaf ip-network {
-      tailf:info "Private IP network";
- mandatory true;
- type inet:ip-prefix;
- }
- leaf bandwidth {
- tailf:info "Bandwidth in bps";
- mandatory true;
- type uint32;
- }
- }
- }
-}
-```
-
-The snippet above contains the l3vpn service model. The structure of the model is very simple: every VPN has a name, an AS number, and a list of all the endpoints in the VPN. Each endpoint has:
-
-* A unique ID.
-* A reference to a device (a CE router in our case).
-* A pointer to the LAN-local interface on the CE router. This is kept as a string since we want this to work in a multi-vendor environment.
-* LAN private IP network.
-* Bandwidth on the VPN connection.
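-
-To make this concrete, creating an instance of the service in the CLI could look like this (values borrowed from the earlier configuration snippets):
-
-```cli
-admin@ncs(config)# vpn l3vpn volvo as-number 65101
-admin@ncs(config-l3vpn-volvo)# endpoint main-office ce-device ce1 ce-interface GigabitEthernet0/11 ip-network 10.7.7.0/24 bandwidth 6000000
-admin@ncs(config-endpoint-main-office)# commit
-```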
-
-To be able to derive the CE-to-PE connections, we use a very simple topology model. Notice that this YANG snippet does not contain any service point, which means that this is not a service model, but just a YANG schema that lets us store information in CDB.
-
-```yang
-container topology {
- list connection {
- key name;
- leaf name {
- type string;
- }
- container endpoint-1 {
- tailf:cli-compact-syntax;
- uses connection-grouping;
- }
- container endpoint-2 {
- tailf:cli-compact-syntax;
- uses connection-grouping;
- }
- leaf link-vlan {
- type uint32;
- }
- }
-}
-
-grouping connection-grouping {
- leaf device {
- type leafref {
- path "/ncs:devices/ncs:device/ncs:name";
- }
- }
- leaf interface {
- type string;
- }
- leaf ip-address {
- type tailf:ipv4-address-and-prefix-length;
- }
-}
-```
-
-The model simply contains a list of connections, where each connection identifies the device, interface, and IP address at each end.
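-
-For instance, the CE-PE link from the earlier configuration snippets could be recorded like this (the names and addresses are illustrative):
-
-```cli
-admin@ncs(config)# topology connection c0 endpoint-1 device ce1 interface GigabitEthernet0/1 ip-address 192.168.1.5/30
-admin@ncs(config-connection-c0)# endpoint-2 device pe0 interface GigabitEthernet0/0/0/3 ip-address 192.168.1.6/30
-admin@ncs(config-connection-c0)# link-vlan 77
-```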
-
-### Defining the Mapping
-
-Since the mapping logic must use the topology model to look up which PE routers to configure, a purely template-based, declarative mapping is not possible. Using Java and configuration templates together is the right approach.
-
-The Java logic lets you set a list of parameters that can be consumed by the configuration templates. One huge benefit of this approach is that all the parameters set in the Java code are completely vendor-agnostic. When writing the code, there is no need to know which kinds of devices or vendors exist in the network, thus creating an abstraction of vendor-specific configuration. This also means that, in order to create the configuration template, there is no need to know the service logic in the Java code. The configuration template can instead be created and maintained by subject matter experts, the network engineers.
-
-With this service mapping approach, it makes sense to modularize the service mapping by creating configuration templates on a per-feature level, creating an abstraction for a feature in the network. In this example, that means we will create the following templates:
-
-* CE router
-* PE router
-
-This is both to make services easier to maintain and create, and to create components that are reusable across different services. This can of course be made even more fine-grained, with templates for, for example, BGP or interface configuration, if needed.
-
-Since the configuration templates are decoupled from the service logic, it is also possible to create and add additional templates to a running NSO system. You can, for example, add a CE router from a new vendor to the Layer 3 VPN service by only creating a new configuration template, using the set of parameters from the service logic, without changing anything in the other logical layers.
-
-
-*Figure: The MPLS VPN Example*
-
-#### **The Java Code**
-
-The Java part of the service mapping is very simple and follows these pseudo-code steps:
-
-```
-READ topology
-FOR EACH endpoint
-    USING topology DERIVE connected-pe-router
-    READ ce-pe-connection
-    SET pe-parameters
-    SET ce-parameters
-    APPLY TEMPLATE l3vpn-ce
-    APPLY TEMPLATE l3vpn-pe
-```
-
-This section goes through the relevant parts of the Java code outlined by the pseudo-code above. The code starts by defining the configuration templates and reading the list of configured endpoints and the topology. The NAVU API is used for navigating the data models.
-
-```java
-Template peTemplate = new Template(context, "l3vpn-pe");
-Template ceTemplate = new Template(context, "l3vpn-ce");
-NavuList endpoints = service.list("endpoint");
-NavuContainer topology = ncsRoot.getParent().
-    container("http://com/example/l3vpn").
-    container("topology");
-```
-
-The next step is iterating over the VPN endpoints configured in the service and finding the connected PE router, using small helper methods that navigate the configured topology.
-
-```java
-for (NavuContainer endpoint : endpoints.elements()) {
-    try {
-        String ceName = endpoint.leaf("ce-device").valueAsString();
-        // Get the PE connection for this endpoint router
-        NavuContainer conn = getConnection(topology, ceName);
-        NavuContainer peEndpoint = getConnectedEndpoint(conn, ceName);
-        NavuContainer ceEndpoint = getMyEndpoint(conn, ceName);
-```
-
-The parameter dictionary is created from the `TemplateVariables` class and populated with the appropriate parameters.
-
-```java
-TemplateVariables vpnVar = new TemplateVariables();
-vpnVar.putQuoted("PE", peEndpoint.leaf("device").valueAsString());
-vpnVar.putQuoted("CE", endpoint.leaf("ce-device").valueAsString());
-vpnVar.putQuoted("VLAN_ID", vlan.valueAsString());
-vpnVar.putQuoted("LINK_PE_ADR",
-    getIPAddress(peEndpoint.leaf("ip-address").valueAsString()));
-vpnVar.putQuoted("LINK_CE_ADR",
-    getIPAddress(ceEndpoint.leaf("ip-address").valueAsString()));
-vpnVar.putQuoted("LINK_MASK",
-    getNetMask(ceEndpoint.leaf("ip-address").valueAsString()));
-vpnVar.putQuoted("LINK_PREFIX",
-    getIPPrefix(ceEndpoint.leaf("ip-address").valueAsString()));
-```
-
-The last step, after all parameters have been set, is applying the templates for the CE and PE routers of this VPN endpoint.
-
-```java
-peTemplate.apply(service, vpnVar);
-ceTemplate.apply(service, vpnVar);
-```
-
-#### **Configuration Templates**
-
-The configuration templates are XML templates based on the structure of the device YANG models. There is a very easy way to create the configuration templates for the service mapping if NSO is connected to a device that already has the appropriate configuration on it. Use the following steps:
-
-1. Configure the device with the appropriate configuration.
-2. Add the device to NSO.
-3. Sync the configuration to NSO.
-4. Display the device configuration in XML template format.
-5. Save the XML template output to a configuration template file and replace the configured values with parameters.
-
-The commands in NSO give the following output. To make the example simpler, only the BGP part of the configuration is used:
-
-```cli
-admin@ncs# devices device ce1 sync-from
-admin@ncs# show running-config devices device ce1 config \
-    ios:router bgp | display xml-template
-<config-template xmlns="http://tail-f.com/ns/config/1.0">
-  <devices xmlns="http://tail-f.com/ns/ncs">
-    <device>
-      <name>ce1</name>
-      <config>
-        <router xmlns="urn:ios">
-          <bgp>
-            <as-no>65101</as-no>
-            <neighbor>
-              <id>192.168.1.6</id>
-              <remote-as>100</remote-as>
-              <activate/>
-            </neighbor>
-            <network>
-              <number>10.7.7.0</number>
-            </network>
-          </bgp>
-        </router>
-      </config>
-    </device>
-  </devices>
-</config-template>
-```
-
-The final configuration template with the values replaced by parameters is shown below. If a parameter starts with a `$` sign, it is taken from the Java parameter dictionary; otherwise, it is a direct XPath reference to a value from the service instance.
-
-```xml
-<config-template xmlns="http://tail-f.com/ns/config/1.0">
-  <devices xmlns="http://tail-f.com/ns/ncs">
-    <device>
-      <name>{$CE}</name>
-      <config>
-        <router xmlns="urn:ios">
-          <bgp>
-            <as-no>{/as-number}</as-no>
-            <neighbor>
-              <id>{$LINK_PE_ADR}</id>
-              <remote-as>100</remote-as>
-              <activate/>
-            </neighbor>
-            <network>
-              <number>{$LOCAL_CE_NET}</number>
-            </network>
-          </bgp>
-        </router>
-      </config>
-    </device>
-  </devices>
-</config-template>
-```
diff --git a/development/advanced-development/developing-services/services-deep-dive.md b/development/advanced-development/developing-services/services-deep-dive.md
deleted file mode 100644
index 9d6938ee..00000000
--- a/development/advanced-development/developing-services/services-deep-dive.md
+++ /dev/null
@@ -1,1389 +0,0 @@
----
-description: Deep dive into service implementation.
----
-
-# Services Deep Dive
-
-{% hint style="warning" %}
-**Before you Proceed**
-
-This section discusses the implementation details of services in NSO. The reader should already be familiar with the concepts described in the introductory sections and [Implementing Services](../../core-concepts/implementing-services.md).
-
-For an introduction to services, see [Develop a Simple Service](../../introduction-to-automation/develop-a-simple-service.md) instead.
-{% endhint %}
-
-## Common Service Model
-
-Each service type in NSO extends a part of the data model (a list or a container) with the `ncs:servicepoint` statement and the `ncs:service-data` grouping. This is what defines an NSO service.
-
-The service point instructs NSO to involve the service machinery (Service Manager) for management of that part of the data tree and the `ncs:service-data` grouping contains definitions common to all services in NSO. Defined in `tailf-ncs-services.yang`, `ncs:service-data` includes parts that are required for the proper operation of FASTMAP and the Service Manager. Every service must therefore use this grouping as part of its data model.
-
-In addition, `ncs:service-data` provides a common service interface to the users, consisting of:
-
-* **`check-sync`, `deep-check-sync` actions**: Check whether the configuration created by the service is (still) there, i.e., whether a re-deploy of this service would produce no changes. The deep variant also retrieves the latest configuration from all the affected devices, making it relatively expensive.
-* **`re-deploy`, `reactive-re-deploy` actions**: Re-run the service mapping logic and deploy any changes from the current configuration. The non-reactive variant supports commit parameters, such as dry-run. The reactive variant performs an asynchronous re-deploy as the user of the original commit and uses the commit parameters from the latest commit of this service. It is often used with nano services, for example to restart a failed nano service.
-* **`un-deploy` action**: Remove the configuration produced by the service instance but keep the instance data, allowing a re-deploy later. This action effectively deactivates the service while keeping it in the system.
-* **`get-modifications` action**: Show the changes in the configuration that this service instance produced, as if this were the only service that made the changes.
-* **`touch` action**: Available in configure mode, it marks the service as changed and allows redeploying multiple services in the same transaction.
-* **`directly-modified`, `modified` containers**: List the devices and services that the configuration produced by this service affects directly or indirectly (through other services).
-* **`used-by-customer-service` leaf-list**: List of customer services (defined under `/services/customer-service`) that this service is part of. Customer service is an optional concept that allows you to group multiple NSO services as belonging to the same customer.
-* **`commit-queue` container**: Contains commit queue items related to this service. See [Commit Queue](../../../operation-and-usage/operations/nso-device-manager.md#user_guide.devicemanager.commit-queue) for details.
-* **`created`, `last-modified`, `last-run` leafs**: Date and time of the main service events.
-* **`log` container**: Contains log entries for important service events, such as those related to the commit queue or generated by user code. Defined in `tailf-ncs-log.yang`.
-* **`plan-location` leaf**: Location of the plan data if the service plan is used. See [Nano Services for Staged Provisioning](../../core-concepts/nano-services.md) for more on service plans and using alternative plan locations.
-
-While not part of `ncs:service-data` as such, you may consider the `service-commit-queue-event` notification part of the core service interface. The notification provides information about the state of the service when the service uses the commit queue. As an example, an event-driven application uses this notification to find out when a service instance has been deployed to the devices. See the `showcase_rc.py` script in [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-stack) for sample Python code, leveraging the notification. See `tailf-ncs-services.yang` for the full definition of the notification.
-
-NSO Service Manager is responsible for providing the functionality of the common service interface, requiring no additional user code. This interface is the same for classic and nano services, whereas nano services further extend the model.
-
-## Services and Transactions
-
-NSO calls into Service Manager when accessing actions and operational data under the common service interface, or when the service instance configuration data (the data under the service point) changes. NSO being a transactional system, configuration data changes happen in a transaction.
-
-When applied, a transaction goes through multiple stages, as shown by the progress trace (e.g. using `commit | details` in the CLI). The detailed output breaks up the transaction into four distinct phases:
-
-1. validation
-2. write-start
-3. prepare
-4. commit
-
-These phases deal with how the network-wide transactions work:
-
-The validation phase prepares and validates the new configuration (including the NSO copy of device configurations); then the CDB processes the changes and prepares them for local storage in the write-start phase.
-
-The prepare stage sends out the changes to the network through the Device Manager and the HA system. The changes are staged (e.g. in the candidate data store) and validated if the device supports it, otherwise, the changes are activated immediately.
-
-If all systems accepted the new configuration, the transaction enters the commit phase, marking the new NSO configuration as active and activating or committing the staged configuration on remote devices. Otherwise, it enters the abort phase, discarding the changes and asking the NEDs to revert activated changes on devices that do not support transactions (e.g., those without a candidate data store).
-
-
-*Figure: Typical Transaction Phases*
-
-There are also two types of locks involved with the transaction that are of interest to the service developer: the service write lock and the transaction lock. The latter is a global lock required to serialize transactions, while the former is a per-service-type lock for serializing services that cannot be run in parallel. See [Scaling and Performance Optimization](../scaling-and-performance-optimization.md) for more details and their impact on performance.
-
-The first phase, historically called validation, does more than just validate data and is the phase a service deals with the most. The other three support the NSO service framework, and a service developer rarely interacts with them directly.
-
-We can further break down the first phase into the following stages:
-
-1. rollback creation
-2. pre-transform validation
-3. transforms
-4. full data validation
-5. conflict check and transaction lock
-
-When the transaction starts applying, NSO captures the initial intent and creates a rollback file, which allows one to reverse or roll back the intent. For example, the rollback file might contain the information that you changed a service instance parameter but it would not contain the service-produced device changes.
-
-Then the first, partial validation takes place. It ensures the service input parameters are valid according to the service YANG model, so the service code can safely use provided parameter values.
-
-Next, NSO runs transaction hooks and performs the necessary transforms, which alter the data before it is saved, for example encrypting passwords. This is also where the Service Manager invokes FASTMAP and service mapping callbacks, recording the resulting changes. NSO takes service write locks in this stage, too.
-
-After transforms, there are no more changes to the configuration data, and the full validation starts, including YANG model constraints over the complete configuration, custom validation through validation points, and configuration policies (see [Policies](../../../operation-and-usage/operations/basic-operations.md#d5e319) in Operation and Usage).
-
-
-*Figure: Stages of Transaction Validation Phase*
-
-Throughout the phase, the transaction engine makes checkpoints, so it can restart the transaction faster in case of concurrency conflicts. The check for conflicts happens at the end of this first phase when NSO also takes the global transaction lock. Concurrency is further discussed in [NSO Concurrency Model](../../core-concepts/nso-concurrency-model.md).
-
-## Service Callbacks
-
-The main callback associated with a service point is the create callback, designed to produce the required (new) configuration, while FASTMAP takes care of the other operations, such as update and delete.
-
-NSO implements two additional, optional callbacks for scenarios where create is insufficient. These are the pre- and post-modification callbacks that NSO invokes before (pre) or after (post) create. These callbacks work outside the scope tracked by FASTMAP; that is, changes done in pre- and post-modification are not automatically removed during update or delete of the service instance.
-
-For example, you can use the pre-modification callback to check the service prerequisites (pre-check) or make changes that you want persisted even after the service is removed, such as enabling some global device feature. The latter may be required when NSO is not the only system managing the device and removing the feature configuration would break non-NSO managed services.
-
-Similarly, you might use post-modification to reset the configuration to some default after the service is removed. Say the service configures an interface on a router for customer VPN. However, when the service is de-provisioned (removed), you don't want to simply erase the interface configuration. Instead, you want to put it in shutdown and configure it for a special, unused VLAN. The post-modification callback allows you to achieve this goal.
-
-The main difference from create callback is that pre- and post-modification are called on update and delete, as well as service create. Since the service data node may no longer exist in case of delete, the API for these callbacks does not supply the `service` object. Instead, the callback receives the operation and key path to the service instance. See the following API signatures for details.
-
-{% code title="Example: Service Callback Signatures in Python" %}
-```python
- @Service.pre_modification
- def cb_pre_modification(self, tctx, op, kp, root, proplist): ...
-
- @Service.create
- def cb_create(self, tctx, root, service, proplist): ...
-
- @Service.post_modification
- def cb_post_modification(self, tctx, op, kp, root, proplist): ...
-```
-{% endcode %}
-
-The Python callbacks use the following function arguments:
-
-* `tctx`: A TransCtxRef object containing transaction data, such as user session and transaction handle information.
-* `op`: Integer representing operation: create (`ncs.dp.NCS_SERVICE_CREATE`), update (`ncs.dp.NCS_SERVICE_UPDATE`), or delete (`ncs.dp.NCS_SERVICE_DELETE`) of the service instance.
-* `kp`: A HKeypathRef object with a key path of the affected service instance, such as `/svc:my-service{instance1}`.
-* `root`: A Maagic node for the root of the data model.
-* `service`: A Maagic node for the service instance.
-* `proplist`: Opaque service properties, see [Persistent Opaque Data](services-deep-dive.md#ch_svcref.opaque).
-
-{% code title="Example: Service Callback Signatures in Java" %}
-```java
- @ServiceCallback(servicePoint = "...",
- callType = ServiceCBType.PRE_MODIFICATION)
- public Properties preModification(ServiceContext context,
- ServiceOperationType operation,
- ConfPath path,
- Properties opaque)
- throws DpCallbackException;
-
- @ServiceCallback(servicePoint="...",
- callType=ServiceCBType.CREATE)
- public Properties create(ServiceContext context,
- NavuNode service,
- NavuNode ncsRoot,
- Properties opaque)
- throws DpCallbackException;
-
- @ServiceCallback(servicePoint = "...",
- callType = ServiceCBType.POST_MODIFICATION)
- public Properties postModification(ServiceContext context,
- ServiceOperationType operation,
- ConfPath path,
- Properties opaque)
- throws DpCallbackException;
-```
-{% endcode %}
-
-The Java callbacks use the following function arguments:
-
-* `context`: A ServiceContext object for accessing root and service instance NavuNode in the current transaction.
-* `operation`: ServiceOperationType enum representing operation: `CREATE`, `UPDATE`, `DELETE` of the service instance.
-* `path`: A ConfPath object with a key path of the affected service instance, such as `/svc:my-service{instance1}`.
-* `ncsRoot`: A NavuNode for the root of the `ncs` data model.
-* `service`: A NavuNode for the service instance.
-* `opaque`: Opaque service properties, see [Persistent Opaque Data](services-deep-dive.md#ch_svcref.opaque).
-
-See [examples.ncs/service-management/iface-postmod-py](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/iface-postmod-py) and [examples.ncs/service-management/iface-postmod-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/iface-postmod-java) examples for a sample implementation of the post-modification callback.
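-
-As a rough sketch of the pattern (not the exact code of the referenced examples), a pre-modification callback can stash the device and interface in the opaque properties, so that a post-modification callback can shut the interface down when the service is deleted; an `iface`-style model with `device` and `interface` leaves is assumed:
-
-```python
-import ncs
-from ncs.application import Service
-
-
-class ServiceCallbacks(Service):
-    @Service.pre_modification
-    def cb_pre_modification(self, tctx, op, kp, root, proplist):
-        # On create and update, remember the inputs;
-        # on delete, the service data is already gone.
-        if op != ncs.dp.NCS_SERVICE_DELETE:
-            service = ncs.maagic.get_node(ncs.maagic.get_trans(root), kp)
-            proplist = [(n, v) for (n, v) in proplist
-                        if n not in ('DEVICE', 'INTERFACE')]
-            proplist.append(('DEVICE', service.device))
-            proplist.append(('INTERFACE', service.interface))
-        return proplist
-
-    @Service.post_modification
-    def cb_post_modification(self, tctx, op, kp, root, proplist):
-        if op == ncs.dp.NCS_SERVICE_DELETE:
-            props = dict(proplist)
-            intf = (root.devices.device[props['DEVICE']].config
-                    .ios__interface.GigabitEthernet[props['INTERFACE']])
-            # Changes made here persist after the service is gone,
-            # since they are not tracked by FASTMAP
-            intf.shutdown.create()
-        return proplist
-```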
-
-Additionally, you may implement these callbacks with templates. Refer to [Service Callpoints and Templates](../../core-concepts/templates.md#ch_templates.servicepoint) for details.
-
-### Persistent Opaque Data
-
-FASTMAP greatly simplifies service code, so it usually only needs to deal with the initial mapping. NSO achieves this by first discarding all the configuration performed during the create callback of the previous run. In other words, the service create code always starts anew, with a blank slate.
-
-If you need to keep some private service data across runs of the create callback, or pass data between callbacks, such as pre- and post-modification, you can use opaque properties.
-
-The opaque object is available in the service callbacks as an argument, typically named `proplist` (Python) or `opaque` (Java). It contains a set of named properties with their corresponding values.
-
-If you wish to use the opaque properties, it is crucial that your code returns the properties object from the create call; otherwise, the service machinery will not save the new version.
-
-Like the pre- and post-modification callbacks, the opaque properties persist data outside of FASTMAP. Unlike the pre- and post-modification data, however, the opaque data is deleted when the service instance is deleted.
-
-{% code title="Example: Using proplist in Python" %}
-```python
-    @Service.create
-    def cb_create(self, tctx, root, service, proplist):
-        intf = None
-        # proplist is of type list[tuple[str, str]]
-        for pname, pvalue in proplist:
-            if pname == 'INTERFACE':
-                intf = pvalue
-
-        if intf is None:
-            intf = '...'
-            # append a (name, value) tuple
-            proplist.append(('INTERFACE', intf))
-
-        return proplist
-```
-{% endcode %}
-
-{% code title="Example: Using opaque in Java" %}
-```java
- public Properties create(ServiceContext context,
- NavuNode service,
- NavuNode ncsRoot,
- Properties opaque)
- throws DpCallbackException {
- // In Java API, opaque is null when service instance is first created.
- if (opaque == null) {
- opaque = new Properties();
- }
- String intf = opaque.getProperty("INTERFACE");
- if (intf == null) {
- intf = "...";
- opaque.setProperty("INTERFACE", intf);
- }
-
- return opaque;
- }
-```
-{% endcode %}
-
-The [examples.ncs/service-management/iface-postmod-py](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/iface-postmod-py) and [examples.ncs/service-management/iface-postmod-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/iface-postmod-java) examples showcase the use of opaque properties.
-
-## Defining Static Service Conflicts
-
-NSO by default enables concurrent scheduling and execution of services to maximize throughput. However, concurrent execution can be problematic for non-thread-safe services or services that are known to always conflict with themselves or other services, such as when they read and write the same shared data. See [NSO Concurrency Model](../../core-concepts/nso-concurrency-model.md) for details.
-
-To prevent NSO from scheduling a service instance together with an instance of another service, declare a static conflict in the service model, using the `ncs:conflicts-with` extension. The following example shows a service with two declared static conflicts, one with itself and one with another service, named `other-service`.
-
-{% code title="Example: Service with Declared Static Conflicts" %}
-```yang
- list example-service {
- key name;
- leaf name {
- type string;
- }
- uses ncs:service-data;
- ncs:servicepoint example-service {
- ncs:conflicts-with example-service;
- ncs:conflicts-with other-service;
- }
- }
-```
-{% endcode %}
-
-This means each service instance will wait for service instances that started before it (and are of the `example-service` or `other-service` type) to finish before proceeding.
-
-## Reference Counting Overlapping Configuration
-
-FASTMAP knows that a particular piece of configuration belongs to a service instance, allowing NSO to revert the change as needed. But what happens when several service instances share a resource that may or may not exist before the first service instance is created? If the service implementation naively checks for existence and creates the resource when it is missing, then the resource will be tracked with the first service instance only. If, later on, this first instance is removed, then the shared resource is also removed, affecting all other instances.
-
-A well-known solution to this kind of problem is reference counting. NSO uses reference counting by default with the XML templates and Python Maagic API, while in Java Maapi and Navu APIs, the `sharedCreate()`, `sharedSet()`, and `sharedSetValues()` functions need to be used.
-
-When enabled, the reference counter allows FASTMAP algorithm to keep track of the usage and only delete data when the last service instance referring to this data is removed.
-
-Furthermore, containers and list items created using the `sharedCreate()` and `sharedSetValues()` functions also get an additional attribute called `backpointer`. (But this functionality is currently not available for individual leafs.)
-
-`backpointer` points back to the service instance that created the entity in the first place. This makes it possible to look at part of the configuration, say under `/devices` tree, and answer the question: which parts of the device configuration were created by which service?
-
-To see reference counting in action, start the [examples.ncs/service-management/implement-a-service/iface-v3](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/iface-v3) example with `make demo` and configure a service instance.
-
-```bash
-admin@ncs(config)# iface instance1 device c1 interface 0/1 ip-address 10.1.2.3 cidr-netmask 28
-admin@ncs(config)# commit
-```
-
-Then configure another service instance with the same parameters and use the `display service-meta-data` pipe to show the reference counts and backpointers:
-
-```bash
-admin@ncs(config)# iface instance2 device c1 interface 0/1 ip-address 10.1.2.3 cidr-netmask 28
-admin@ncs(config)# commit dry-run
-cli {
- local-node {
- data +iface instance2 {
- + device c1;
- + interface 0/1;
- + ip-address 10.1.2.3;
- + cidr-netmask 28;
- +}
- }
-}
-admin@ncs(config)# commit and-quit
-admin@ncs# show running-config devices device c1 config interface \
-    GigabitEthernet 0/1 | display service-meta-data
-devices device c1
- config
- ! Refcount: 2
- ! Backpointer: [ /iface:iface[iface:name='instance1'] /iface:iface[iface:name='instance2'] ]
- interface GigabitEthernet0/1
- ! Refcount: 2
- ip address 10.1.2.3 255.255.255.240
- ! Refcount: 2
- ! Backpointer: [ /iface:iface[iface:name='instance1'] /iface:iface[iface:name='instance2'] ]
- ip dhcp snooping trust
- exit
- !
-!
-```
-
-Notice how `commit dry-run` produces no new device configuration but the system still tracks the changes. If you wish, remove the first instance and verify the `GigabitEthernet 0/1` configuration is still there, but is gone when you also remove the second one.
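-
-For example, deleting the instances one at a time lets you watch the reference count drop before the configuration finally disappears:
-
-```bash
-admin@ncs(config)# no iface instance1
-admin@ncs(config)# commit
-admin@ncs(config)# no iface instance2
-admin@ncs(config)# commit
-```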
-
-But what happens if the two services produce different configurations for the same node? Say, one sets the IP address to `10.1.2.3` and the other to `10.1.2.4`. Conceptually, these two services are incompatible, and instantiating both at the same time produces a broken configuration (instantiating the second service instance breaks the configuration for the first). What is worse is that the current configuration depends on the order the services were deployed or re-deployed. For example, re-deploying the first service will change the configuration from `10.1.2.4` back to `10.1.2.3` and vice versa. Such inconsistencies break the declarative configuration model and really should be avoided.
-
-In practice, however, NSO does not prevent services from producing such configuration. But note that we strongly recommend against it and that there are associated limitations, such as service un-deploy not reverting configuration to that produced by the other instance (but when all services are removed, the original configuration is still restored).
-
-The `commit | debug` service pipe command warns about any such conflict that it finds but may miss conflicts on individual leafs. The best practice is to use integration tests in the service development life cycle to ensure there are no conflicts, especially when multiple teams develop their own set of services that are to be deployed on the same NSO instance.
-
-## Stacked Services
-
-Much like a service in NSO can provision device configurations, it can also provision other, non-device data, as well as other services. We call the approach of services provisioning other services 'service stacking', and the services involved 'stacked services'.
-
-Service stacking concepts usually come into play for bigger, more complex services. There are a number of reasons why you might prefer stacked services to a single monolithic one:
-
-* Smaller, more manageable services with simpler logic.
-* Separation of concerns and responsibility.
-* Clearer ownership across teams for (parts of) overall service.
-* Smaller services reusable as components across the solution.
-* Avoiding overlapping configuration between service instances causing conflicts, such as using one service instance per device (see examples in [Designing for Maximal Transaction Throughput](../scaling-and-performance-optimization.md#ncs.development.scaling.throughput)).
-
-Stacked services are also the basis for LSA, which takes this concept even further. See [Layered Service Architecture](../../../administration/advanced-topics/layered-service-architecture.md) for details.
-
-The standard naming convention with stacked services distinguishes between a Resource-Facing Service (RFS), which directly configures one or more devices, and a Customer-Facing Service (CFS), which is the top-level service, configuring only other services, not devices. There can also be more than two layers of services in the stack.
-
-While NSO does not prevent a single service from configuring devices as well as services, in the majority of cases this results in a less clean design and is best avoided.
-
-Overall, creating stacked services is very similar to the non-stacked approach. First, you design the RFS services as usual. You might even take existing services and reuse those. These then become your lower-level services, since they are lower in the stack.
-
-Then you create a higher-level service, say a CFS, that configures another service, or a few, instead of a device. You can even use a template-only service to do that, such as:
-
-{% code title="Example: Template for Configuring Another Service (Stacking)" %}
-```xml
-<config-template xmlns="http://tail-f.com/ns/config/1.0">
-  <!-- namespace assumed from the iface example module -->
-  <iface xmlns="http://example.com/iface">
-    <name>instance1</name>
-    <device>c1</device>
-    <interface>0/1</interface>
-    <ip-address>10.1.2.3</ip-address>
-    <cidr-netmask>28</cidr-netmask>
-  </iface>
-</config-template>
-```
-{% endcode %}
-
-The preceding example references an existing `iface` service, such as the one in the [examples.ncs/service-management/implement-a-service/iface-v3](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/iface-v3) example. The template shows hard-coded values, but you can parameterize them as you would for any other service.
-
-In practice, you might find it beneficial to modularize your data model and potentially reuse parts in both the lower- and higher-level services. This avoids duplication while still allowing you to directly expose some of the lower-level service functionality through the higher-level model.
-
-The most important principle to keep in mind is that the data created by any service is owned by that service, regardless of how the mapping is done (through code or templates). If the user deletes a service instance, FASTMAP will automatically delete whatever the service created, including any other services. Likewise, if the operator directly manipulates service data that is created by another service, the higher-level service becomes out of sync. The **check-sync** service action checks this for services as well as devices.
-
-In stacked service design, the lower-level service data is under the control of the higher-level service and must not be directly manipulated. Only the higher-level service may manipulate that data. However, two higher-level services may manipulate the same structures, since NSO performs reference counting (see [Reference Counting Overlapping Configuration](services-deep-dive.md#ch_svcref.refcount)).
-
-## Stacked Service Design
-
-Designing services in NSO offers a great deal of flexibility with multiple approaches available to suit different needs. But what’s the best way to go about it? At its core, a service abstracts a network service or functionality, bridging user-friendly inputs with network configurations. This definition leaves the implementation open-ended, providing countless possibilities for designing and building services. However, there are certain techniques and best practices that can help enhance performance and simplify ongoing maintenance, making your services more efficient and easier to manage.
-
-Regardless of the type of service chosen—whether Java, Python, or plain template services—there are certain design patterns that can be followed to improve their long-term effectiveness. Rather than diving into API-level specifics, we’ll focus on higher-level design principles, with an emphasis on leveraging the stacked service approach for maximum efficiency and scalability.
-
-### Service Performance
-
-When designing a service, the first step is to identify the functionality of the network service and the corresponding device configurations it encompasses. The service should then be designed to generate those configurations. These configurations can either be static—hard-coded into the service if they remain consistent across all instances—or dynamic, represented as variables that adapt based on the service’s input parameters.
-
-The flexibility in service design is virtually limitless, as both Java and Python can be used to define services, allowing for the generation of static or dynamic configurations based on minimal input. Ultimately, the goal is to have the service efficiently represent as much of the required device configuration as possible, while minimizing the number of input parameters.
-
-When striving to achieve the goal of producing comprehensive device configurations, it's common to end up with a service that generates an extensive set of configurations. At first glance, this might seem ideal; however, it can introduce significant performance challenges.
-
-### Service Bottlenecks
-
-As the volume of a service's device configurations increases, its performance often declines. Both creating and modifying the service take longer, regardless of whether the change involves a single line of configuration or the entire set. In fact, the execution time of the service remains consistent for all modifications and increases proportionally with the size of the configurations it generates.
-
-The underlying reason for this behavior is tied to FASTMAP. Without delving too deeply into its mechanics, FASTMAP essentially runs the service logic anew with every deploy or re-deploy (modification), regenerating all the device configurations from scratch. This process not only re-executes user-defined logic—whether in Java, Python, or templates—but also tasks NSO with generating the reverse diffset for the service. As the size of the reverse diffset grows, so does the computational load, leading to slower performance.
-
-From this, it's clear that writing efficient service logic is crucial. Optimizing the time complexity of operations within the service callbacks will naturally improve performance, just as with any other software. However, there's a less obvious yet equally important factor to consider: minimizing the service diffset. A smaller diffset results in better performance overall.
-
-At first glance, this might seem to contradict the initial goal of representing as much configuration as possible with minimal input parameters. This apparent conflict is where the concept of stacked services comes into play, offering a way to balance these priorities effectively.
-
-We want a service to generate as much configuration as possible, but it doesn’t need to handle everything on its own. While a single service becomes slower as it takes on more, distributing the workload across multiple services introduces a new dimension of optimization.
-
-For example, consider a simple service that configures interface descriptions. While not a real network service, it serves as a useful illustration of the impact of heavy operations and large diffsets. Let's explore how this approach can help optimize performance.
-
-```yang
-list python-service {
- key name;
- leaf name {
- type string;
- }
-
- uses ncs:service-data;
- ncs:servicepoint python-service-servicepoint;
-
- list device {
- key name;
- leaf name {
- type leafref {
- path "/ncs:devices/ncs:device/ncs:name";
- }
- }
- leaf number-of-interfaces {
- type uint32;
- }
- }
-}
-```
-
-Each service instance will take, as input, a list of devices to configure and the number of interfaces to be configured for each device.
-
-{% code overflow="wrap" %}
-```python
-@Service.create
-def cb_create(self, tctx, root, service, proplist):
- self.log.info('Service create(service=', service._path, ')')
-
- for d in service.device:
- for i in range(d.number_of_interfaces):
- root.ncs__devices.device[d.name].config.ios__interface.GigabitEthernet.create(i).description = 'Managed by NSO'
-```
-{% endcode %}
-
-The callback will then iterate through each provided device, creating interfaces and assigning descriptions in a loop.
-
-When evaluating the service's performance, there are two key aspects to consider: the callback execution time and the time NSO takes to calculate the diffset. To analyze these, we can use NSO’s progress trace to gather statistics. Let’s start with an example involving three devices and 10 interfaces:
-
-```bash
-admin@ncs(config)# python-service test
-admin@ncs(config-python-service-test)# device CE-1 number-of-interfaces 10
-admin@ncs(config-device-CE-1)# exit
-admin@ncs(config-python-service-test)# device CE-2 number-of-interfaces 10
-admin@ncs(config-device-CE-2)# exit
-admin@ncs(config-python-service-test)# device PE-1 number-of-interfaces 10
-admin@ncs(config-device-PE-1)#
-```
-
-The two key events we need to focus on are the create event for the service, which provides the execution time of the create callback, and the "saving reverse diff-set and applying changes" event, which shows how long NSO took to calculate the reverse diff-set.
-
-{% code overflow="wrap" %}
-```
-2-Jan-2025::09:48:18.110 trace-id=8a94e614-b426-430f-fcd3-4e0639b5cf40 span-id=c4a9037077c54402 parent-span-id=ff9ca4dccad15b30 usid=59 tid=132 datastore=running context=cli subsystem=service-manager service=/python-service[name='test'] create: ok (0.222 s)
-2-Jan-2025::09:48:18.198 trace-id=8a94e614-b426-430f-fcd3-4e0639b5cf40 span-id=2cdb960fde6f386e parent-span-id=ff9ca4dccad15b30 usid=59 tid=132 datastore=running context=cli subsystem=service-manager service=/python-service[name='test'] saving reverse diff-set and applying changes: ok (0.088 s)
-```
-{% endcode %}
-
-Let’s capture the same data for 100 and 1000 interfaces to compare the results.
-
-{% code title="100:" overflow="wrap" %}
-```
-2-Jan-2025::09:49:00.909 trace-id=87b153d7-edd0-120f-4810-cd13fa207abd span-id=37188aea51359bd4 parent-span-id=f55947230241d550 usid=59 tid=214 datastore=running context=cli subsystem=service-manager service=/python-service[name='test'] create: ok (2.316 s)
-2-Jan-2025::09:49:02.299 trace-id=87b153d7-edd0-120f-4810-cd13fa207abd span-id=6a9962e63805673e parent-span-id=f55947230241d550 usid=59 tid=214 datastore=running context=cli subsystem=service-manager service=/python-service[name='test'] saving reverse diff-set and applying changes: ok (1.389 s)
-```
-{% endcode %}
-
-{% code title="1000:" overflow="wrap" %}
-```
-2-Jan-2025::09:50:19.314 trace-id=4b144bc1-f493-a1c6-f1f0-9df45be7a567 span-id=7e7a805a711ae483 parent-span-id=867f790fef787fca usid=59 tid=293 datastore=running context=cli subsystem=service-manager service=/python-service[name='test'] create: ok (28.082 s)
-2-Jan-2025::09:50:34.261 trace-id=4b144bc1-f493-a1c6-f1f0-9df45be7a567 span-id=28a617b1279e8c56 parent-span-id=867f790fef787fca usid=59 tid=293 datastore=running context=cli subsystem=service-manager service=/python-service[name='test'] saving reverse diff-set and applying changes: ok (14.946 s)
-```
-{% endcode %}
-
-We can observe that the time scales proportionally with the workload in the create callback as well as the size of the diffset. To demonstrate that the time remains consistent regardless of the size of the modification, we add one more interface to the 1000 interfaces already configured.
-
-```bash
-admin@ncs(config)# commit dry-run
-cli {
- local-node {
- data devices {
- device CE-1 {
- config {
- interface {
- + GigabitEthernet 1000 {
- + description "Managed by NSO";
- + }
- }
- }
- }
- }
- python-service test {
- device CE-1 {
- - number-of-interfaces 1000;
- + number-of-interfaces 1001;
- }
- }
- }
-}
-```
-
-From the progress trace, we can see that adding one interface took about the same amount of time as adding 1000 interfaces.
-
-{% code overflow="wrap" %}
-```
-2-Jan-2025::09:57:40.581 trace-id=ab51722b-3be8-2a83-bc59-d7b40bfdedd3 span-id=e9039240e794e819 parent-span-id=df585fdf73c00df3 usid=75 tid=425 datastore=running context=cli subsystem=service-manager service=/python-service[name='test'] create: ok (24.900 s)
-2-Jan-2025::09:58:44.309 trace-id=ab51722b-3be8-2a83-bc59-d7b40bfdedd3 span-id=1e841bcb07685884 parent-span-id=df585fdf73c00df3 usid=75 tid=425 datastore=running context=cli subsystem=service-manager service=/python-service[name='test'] saving reverse diff-set and applying changes: ok (15.727 s)
-```
-{% endcode %}
-
-FASTMAP offers significant benefits to our solution, but this performance trade-off is an unavoidable cost. As a result, our service will remain consistently slow for all modifications as long as it handles large-scale device configurations. To address this, our focus must shift to reducing the size of the device configuration.
-
-### Service Stacking
-
-The solution lies in distributing the configurations across multiple services while assigning the main service the role of managing these individual services. By analyzing the current service's functionality, we can easily identify how to break it down: by device. Instead of having a single service provision multiple devices, we will transition to a setup where one main service provisions multiple sub-services, with each sub-service responsible for provisioning a single device. The resulting structure is described below.
-
-We'll begin by renaming our `python-service` to `upper-python-service`. This distinction is purely for clarity and to differentiate the two service types. In practice, the naming itself is not critical, as long as it aligns with the desired naming conventions for the northbound API, which represents the customer-facing service. The `upper-python-service` will still function as the main service that users interact with to configure interfaces on multiple devices, just as in the previous example.
-
-```yang
-list upper-python-service {
-
- key name;
- leaf name {
- type string;
- }
-
- uses ncs:service-data;
- ncs:servicepoint upper-python-service-servicepoint;
-
- list device {
- key name;
- leaf name {
- type leafref {
- path "/ncs:devices/ncs:device/ncs:name";
- }
- }
- leaf number-of-interfaces {
- type uint32;
- }
- }
-}
-```
-
-The `upper-python-service` however, will not provision any devices directly. Instead, it will delegate that responsibility to another layer of services by creating and managing those subordinate services.
-
-```yang
-list lower-python-service {
-
- key "device name";
- leaf name {
- type string;
- }
-
- leaf device {
- type leafref {
- path "/ncs:devices/ncs:device/ncs:name";
- }
- }
-
- uses ncs:service-data;
- ncs:servicepoint lower-python-service-servicepoint;
-
- leaf number-of-interfaces {
- type uint32;
- }
-}
-```
-
-The `lower-python-service` will be created by the `upper-python-service` and will ultimately handle provisioning the device. This service is designed to take only a single device as input, which corresponds to the device it will provision. The behavior and interaction between the two services can be observed in the Python callbacks that define their logic.
-
-```python
-class UpperServiceCallbacks(Service):
-    @Service.create
-    def cb_create(self, tctx, root, service, proplist):
-        self.log.info('Service create(service=', service._path, ')')
-
-        # Create one lower service per device, passing the inputs along
-        for d in service.device:
-            lower = root.stacked_python_service__lower_python_service.create(
-                d.name, service.name)
-            lower.number_of_interfaces = d.number_of_interfaces
-
-class LowerServiceCallbacks(Service):
-    @Service.create
-    def cb_create(self, tctx, root, service, proplist):
-        self.log.info('Service create(service=', service._path, ')')
-
-        # Provision the interfaces on the single device this instance owns
-        config = root.ncs__devices.device[service.device].config
-        for i in range(service.number_of_interfaces):
-            config.ios__interface.GigabitEthernet.create(i).description = \
-                'Managed by NSO'
-```
-
-The upper service creates a lower service for each device, and each lower service is responsible for provisioning its assigned device and populating its interfaces. This approach distributes the workload, reducing the load on individual services. The upper service loops over the total number of devices and generates a diffset consisting of the input parameters for each lower service. Each lower service then loops over the interfaces for its specific device and creates a diffset covering all interfaces for that device.
-
-All of this happens within a single NSO transaction, ensuring that, from the user’s perspective, the behavior remains identical to the previous design.
-
-At this point, you might wonder: if this still occurs in a single transaction and the total number of loops and combined diffset size remain unchanged, how does this improve performance? That's a valid observation. When creating a large dataset all at once, this approach doesn't provide a performance gain; in fact, the extra service layer might introduce a small but negligible amount of overhead.
-
-However, the real benefit becomes apparent in update scenarios, as we’ll illustrate below.
-
-We begin by creating the service to configure 1000 interfaces for each device.
-
-```bash
-admin@ncs(config)# upper-python-service test device CE-1 number-of-interfaces 1000
-admin@ncs(config-device-CE-1)# top
-admin@ncs(config)# upper-python-service test device CE-2 number-of-interfaces 1000
-admin@ncs(config-device-CE-2)# top
-admin@ncs(config)# upper-python-service test device PE-1 number-of-interfaces 1000
-admin@ncs(config-device-PE-1)# commit
-```
-
-The execution time of the `upper-python-service` turned out to be relatively low, as expected. This is because it only involves a loop with three iterations, where data is passed from the input of the `upper-python-service` to each corresponding `lower-python-service`.
-
-Similarly, calculating the diffset is also efficient. The reverse diffset for the `upper-python-service` only includes the configuration for the `lower-python-services`, which consists of just a few lines. This minimal complexity keeps both execution time and diffset calculation fast and lightweight.
-
-{% code overflow="wrap" %}
-```
-2-Jan-2025::10:14:27.682 trace-id=2dc929ca-780d-b076-154a-16d0edc50d05 span-id=58c41383d602d7e4 parent-span-id=49f214d3c1e906fb usid=59 tid=132 datastore=running context=cli subsystem=service-manager service=/upper-python-service[name='test'] create: ok (0.012 s)
-2-Jan-2025::10:14:27.706 trace-id=2dc929ca-780d-b076-154a-16d0edc50d05 span-id=3dcdb68f79b38f78 parent-span-id=49f214d3c1e906fb usid=59 tid=132 datastore=running context=cli subsystem=service-manager service=/upper-python-service[name='test'] saving reverse diff-set and applying changes: ok (0.023 s)
-```
-{% endcode %}
-
-In the same transaction, we also observe the execution of the three `lower-python-services`.
-
-{% code overflow="wrap" %}
-```
-2-Jan-2025::10:14:35.205 trace-id=2dc929ca-780d-b076-154a-16d0edc50d05 span-id=1aa5131f96e2b4fe parent-span-id=9da61057b7e18fae usid=59 tid=132 datastore=running context=cli subsystem=service-manager service=/lower-python-service[name='test'][device='CE-1'] create: ok (7.492 s)
-2-Jan-2025::10:14:37.743 trace-id=2dc929ca-780d-b076-154a-16d0edc50d05 span-id=3dce5f82d6f5558f parent-span-id=9da61057b7e18fae usid=59 tid=132 datastore=running context=cli subsystem=service-manager service=/lower-python-service[name='test'][device='CE-1'] saving reverse diff-set and applying changes: ok (2.538 s)
-...
-2-Jan-2025::10:14:46.126 trace-id=2dc929ca-780d-b076-154a-16d0edc50d05 span-id=78201c416ffa5ca5 parent-span-id=056757c9dd26bb8e usid=59 tid=132 datastore=running context=cli subsystem=service-manager service=/lower-python-service[name='test'][device='CE-2'] create: ok (8.381 s)
-2-Jan-2025::10:14:48.455 trace-id=2dc929ca-780d-b076-154a-16d0edc50d05 span-id=5b4fd53af68d3233 parent-span-id=056757c9dd26bb8e usid=59 tid=132 datastore=running context=cli subsystem=service-manager service=/lower-python-service[name='test'][device='CE-2'] saving reverse diff-set and applying changes: ok (2.328 s)
-...
-2-Jan-2025::10:14:56.294 trace-id=2dc929ca-780d-b076-154a-16d0edc50d05 span-id=374cecf183a5065a parent-span-id=e513c0823e29256c usid=59 tid=132 datastore=running context=cli subsystem=service-manager service=/lower-python-service[name='test'][device='PE-1'] create: ok (7.837 s)
-2-Jan-2025::10:14:58.645 trace-id=2dc929ca-780d-b076-154a-16d0edc50d05 span-id=b0d42c480167757d parent-span-id=e513c0823e29256c usid=59 tid=132 datastore=running context=cli subsystem=service-manager service=/lower-python-service[name='test'][device='PE-1'] saving reverse diff-set and applying changes: ok (2.351 s)
-```
-{% endcode %}
-
-Each service callback took approximately 8 seconds to execute, and calculating the diffset took around 2.5 seconds per service. This results in a total callback execution time of about 24 seconds and a total diffset calculation time of around 8 seconds, which is less than the time required in the previous service design.
-
-So, what’s the advantage of stacking services like this? The real benefit becomes evident during updates. Let’s add an interface to device `CE-1`, just as we did with the previous design, to illustrate this.
-
-```bash
-admin@ncs(config)# upper-python-service test device CE-1 number-of-interfaces 1001
-admin@ncs(config-device-CE-1)# commit dry-run
-cli {
- local-node {
- data upper-python-service test {
- device CE-1 {
- - number-of-interfaces 1000;
- + number-of-interfaces 1001;
- }
- }
- lower-python-service test CE-1 {
- - number-of-interfaces 1000;
- + number-of-interfaces 1001;
- }
- devices {
- device CE-1 {
- config {
- interface {
- + GigabitEthernet 1000 {
- + description "Managed by NSO";
- + }
- }
- }
- }
- }
- }
-}
-```
-
-Observing the progress trace generated for this scenario gives a clearer picture. From the trace, we see that the `upper-python-service` was invoked and executed just as quickly as it did during the initial deployment. The same applies to the callback execution and diffset calculation time for the `lower-python-service` handling `CE-1`.
-
-But what about `CE-2` and `PE-1`? Interestingly, there are no traces of these services in the log. That’s because they were never executed. The modification was passed only to the relevant `lower-python-service` for `CE-1`, while the other two services remained untouched.
-
-And that is the power of stacked services.
-
-### Resource-Facing Layer
-
-Does this mean the more we stack, the better? Should every single line of configuration be split into its own service? The answer is no. In most real-world cases, the primary performance bottleneck is the diffset calculation rather than the callback execution time. Service callbacks typically aren't computationally intensive, nor should they be.
-
-Stacked services are generally used to address issues with diffset calculation, and this strategy is only effective if we can reduce the diffset size of the "hottest" service. However, increasing the number of services managed by the upper service also increases the total configuration it must generate on each re-deploy. This trade-off needs careful consideration to strike the right balance.
-
-#### Modeling the Layer
-
-When restructuring a service into a stacked service model, the first target should always be devices. If a service configures multiple devices, it’s a good practice to split it up by adding another layer of services, ensuring that no more than one device is provisioned by any service at the lowest layer. This approach reduces the service's complexity, making it easier to maintain.
-
-Focusing on a single device per service also provides significant advantages in various scenarios, such as restoring consistency when a device goes out of sync, handling NED migrations, hardware upgrades, or even migrating a device between NSO instances.
-
-The lower service we created uses the device name as its key. The primary reason for this is to ensure a clear separation of service instances based on the devices they are deployed on. One key benefit of this approach is the ability to easily identify all services deployed on a specific device by simply filtering for that device. For example, after adding a few more services, you could list all services associated with a particular device using a `show` command similar to the following.
-
-```bash
-admin@ncs(config)# show full-configuration lower-python-service CE-1
-lower-python-service CE-1 another-instance
- number-of-interfaces 1
-!
-lower-python-service CE-1 test
- number-of-interfaces 1001
-!
-lower-python-service CE-1 yet-another-instance
- number-of-interfaces 1
-!
-```
-
-While the complete distribution of the service looks like this:
-
-```bash
-admin@ncs(config)# show full-configuration lower-python-service
-lower-python-service CE-1 another-instance
- number-of-interfaces 1
-!
-lower-python-service CE-1 test
- number-of-interfaces 1001
-!
-lower-python-service CE-1 yet-another-instance
- number-of-interfaces 1
-!
-lower-python-service CE-2 test
- number-of-interfaces 1000
-!
-lower-python-service PE-1 test
- number-of-interfaces 1000
-!
-```
-
-This approach provides an excellent way to maintain an overview of services deployed on each device. However, introducing new service types presents a challenge: you wouldn’t be able to see all service types with a single show command. For instance, `show lower-python-service ...` will only display instances of the `lower-python-service`. But what happens when the device also has L2VPNs, L3VPNs, or other service types, as it would in a real network?
-
-#### Organizing the Schema
-
-To address this, we can nest the services within another list. By organizing all services under a common structure, we enable the ability to view and manage multiple service types for a device in a unified manner, providing a comprehensive overview with a single command.
-
-To illustrate this approach, we need to introduce another service type. Moving beyond the dummy example, let’s use a more realistic scenario: the [mpls-vpn-simple](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-simple) example. We'll refactor this service to adopt the stacked service approach while maintaining the existing customer-facing interface.
-
-After the refactor, the service will shift from provisioning multiple devices directly through a single instance to creating a separate service instance for each combination of device, VPN, and endpoint; these are what we call resource-facing services. The resource-facing services will be structured so that all device-specific services are grouped under a node for each device.
-
-This is accomplished by introducing a list of devices, modeled within a separate package. We’ll create this new package and call it `resource-facing-services`, with the following model definition:
-
-```yang
- container resource-facing-services {
- list device {
- description "All services on a device";
-
- key name;
- leaf name {
- type leafref {
- path "/ncs:devices/ncs:device/ncs:name";
- }
- }
- }
- }
-```
-
-This model allows us to organize services by device, providing a unified structure for managing and querying all services deployed on each device.
-
-Each element in this list will represent a device and all the services deployed on it. The model itself is empty, which is intentional, as each resource-facing service (RFS) will be added to this list through augmentation from its respective package. The YANG model for the RFS version of our L3VPN service is designed specifically to integrate seamlessly into this structure.
-
-```yang
- augment "/rfs:resource-facing-services/rfs:device" {
- list l3vpn-rfs {
- key "name endpoint-id";
-
- leaf name {
- tailf:info "Unique service id";
- tailf:cli-allow-range;
- type string;
- }
-
- leaf endpoint-id {
- tailf:info "Endpoint identifier";
- type string;
- }
- uses ncs:service-data;
- ncs:servicepoint l3vpn-rfs-servicepoint;
-
- leaf role {
- type enumeration {
- enum "ce";
- enum "pe";
- }
- }
-
- container remote {
- leaf device {
- type leafref {
- path "/rfs:resource-facing-services/rfs:device/rfs:name";
- }
- }
- leaf ip-address {
- type inet:ipv4-address;
- }
- }
-
- leaf as-number {
- description "AS used within all VRF of the VPN";
- tailf:info "MPLS VPN AS number.";
- mandatory true;
- type uint32;
- }
-
- container local {
- when "../role = 'ce'";
- uses endpoint-grouping;
- }
- container link {
- uses endpoint-grouping;
- }
- }
- }
-```
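-
-To sketch how the two layers might connect, the following outline shows a possible create callback for the customer-facing `l3vpn` service, writing one RFS instance per device and endpoint. It is only a sketch based on the models above: the `rfs__` module prefix, the `ep.id` key name, and the subset of copied leafs are assumptions, not the complete mapping.
-
-```python
-from ncs.application import Service
-
-class L3vpnCfsCallbacks(Service):
-    @Service.create
-    def cb_create(self, tctx, root, service, proplist):
-        for ep in service.endpoint:
-            # CE-side RFS, grouped under the CE device node
-            ce_dev = root.rfs__resource_facing_services.device.create(
-                ep.ce.device)
-            ce_rfs = ce_dev.l3vpn_rfs.create(service.name, ep.id)
-            ce_rfs.role = 'ce'
-            ce_rfs.as_number = ep.as_number
-
-            # PE-side RFS, grouped under the PE device node
-            pe_dev = root.rfs__resource_facing_services.device.create(
-                ep.pe.device)
-            pe_rfs = pe_dev.l3vpn_rfs.create(service.name, ep.id)
-            pe_rfs.role = 'pe'
-            pe_rfs.as_number = ep.as_number
-```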
-
-We deploy an L3VPN to our network with two CE endpoints by creating the following `l3vpn` customer-facing service.
-
-```bash
-admin@ncs(config)# show full-configuration vpn
-vpn l3vpn volvo
- endpoint c1
- as-number 65001
- ce device CE-1
- ce local interface-name GigabitEthernet
- ce local interface-number 0/9
- ce local ip-address 192.168.0.1
- ce link interface-name GigabitEthernet
- ce link interface-number 0/2
- ce link ip-address 10.1.1.1
- pe device PE-1
- pe link interface-name GigabitEthernet
- pe link interface-number 0/0/0/1
- pe link ip-address 10.1.1.2
- !
- endpoint c2
- as-number 65001
- ce device CE-2
- ce local interface-name GigabitEthernet
- ce local interface-number 0/3
- ce local ip-address 192.168.1.1
- ce link interface-name GigabitEthernet
- ce link interface-number 0/1
- ce link ip-address 10.2.1.1
- pe device PE-1
- pe link interface-name GigabitEthernet
- pe link interface-number 0/0/0/2
- pe link ip-address 10.2.1.2
- !
-!
-```
-
-After deploying our service, we can quickly gain an overview of the services deployed on a device without needing to analyze or reverse-engineer its configurations. For example, we can see that the device `PE-1` is acting as a PE for two different endpoints within a VPN.
-
-```bash
-admin@ncs(config)# show full-configuration resource-facing-services device PE-1
-resource-facing-services device PE-1
- l3vpn-rfs volvo c1
- role pe
- as-number 65001
- link interface-name GigabitEthernet
- link interface-number 0/0/0/1
- link ip-address 10.1.1.2
- link remote ip-address 10.1.1.1
- !
- l3vpn-rfs volvo c2
- role pe
- as-number 65001
- link interface-name GigabitEthernet
- link interface-number 0/0/0/2
- link ip-address 10.2.1.2
- link remote ip-address 10.2.1.1
- !
-!
-```
-
-`CE-1` serves as a CE for that VPN.
-
-```bash
-admin@ncs(config)# show full-configuration resource-facing-services device CE-1
-resource-facing-services device CE-1
- l3vpn-rfs volvo c1
- role ce
- as-number 65001
- local interface-name GigabitEthernet
- local interface-number 0/9
- local ip-address 192.168.0.1
- link interface-name GigabitEthernet
- link interface-number 0/2
- link ip-address 10.1.1.1
- link remote ip-address 10.1.1.2
- !
-!
-```
-
-And `CE-2` serves as another CE for that VPN.
-
-```bash
-admin@ncs(config)# show full-configuration resource-facing-services device CE-2
-resource-facing-services device CE-2
- l3vpn-rfs volvo c2
- role ce
- as-number 65001
- local interface-name GigabitEthernet
- local interface-number 0/3
- local ip-address 192.168.1.1
- link interface-name GigabitEthernet
- link interface-number 0/1
- link ip-address 10.2.1.1
- link remote ip-address 10.2.1.2
- !
-!
-```
-
-## Caveats and Best Practices
-
-This section lists some specific advice for implementing services, as well as any known limitations you might run into.
-
-You may also obtain some useful information by using the `debug service` commit pipe command, such as `commit dry-run | debug service`. The command displays the net effect of the service create code and issues warnings about potentially problematic usage of overlapping shared data.
-
-* **Service callbacks must be deterministic**: NSO invokes service callbacks in a number of situations, such as for dry-run, check-sync, and actual provisioning. If a service does not create the same configuration from the same inputs, NSO sees it as being out of sync, resulting in a lot of configuration churn and making it incompatible with many NSO features.\
-  \
-  If you need to introduce some randomness or rely on some other nondeterministic source of data, make sure to cache the values across callback invocations, such as by using opaque properties (see [Persistent Opaque Data](services-deep-dive.md#ch_svcref.opaque)) or persistent operational data (see [Operational Data](../../core-concepts/implementing-services.md#ch_services.oper)) populated in a pre-modification callback; see the sketch after this list.
-* **Never overwrite service inputs**: Service input parameters capture client intent and a service should never change its own configuration. Such behavior not only muddles the intent but is also temporary when done in the create callback, as the changes are reverted on the next invocation.
-
- \
- If you need to keep some additional data that cannot be easily computed each time, consider using opaque properties (see [Persistent Opaque Data](services-deep-dive.md#ch_svcref.opaque)) or persistent operational data (see [Operational Data](../../core-concepts/implementing-services.md#ch_services.oper)) populated in a pre-modification callback.
-* **No service ordering in a transaction**: NSO is a transactional system and as such does not have the concept of order inside a single transaction. That means NSO does not guarantee any specific order in which the service mapping code executes if the same transaction touches multiple service instances. Likewise, your code should not make any assumptions about running before or after other service code.
-* **Return value of create callback**: The create callback is not the exclusive user of the opaque object; the object can be chained through several different callbacks, such as pre- and post-modification. Therefore, returning `None`/`null` from the create callback is not good practice. Instead, always return the opaque object, even if the create callback does not use it.
-* **Avoid delete in service create**: Unlike creation, deleting configuration does not support reference counting, as there is no data left to reference count. This means the deleted elements are tied to the service instance that deleted them.
-
- \
- Additionally, FASTMAP must store the entire deleted tree and restore it on every service change or re-deploy, only to be deleted again. Depending on the amount of deleted data, this is potentially an expensive operation.
-
- \
- So, a general rule of thumb is to never use delete in service create code. If an explicit delete is used, `debug service` may display the following warning:\\
-
- ```
- *** WARNING ***: delete in service create code is unsafe if data is
- shared by other services
- ```
-
- \
- However, the service may also delete data implicitly, through `when` and `choice` statements in the YANG data model. If a `when` statement evaluates to false, the configuration tree below that node is deleted. Likewise, if a `case` is set in a `choice` statement, the previously set `case` is deleted. This has the same limitations as an explicit delete.
-
- \
-  To avoid these issues, create a separate service that only handles deletion, and use it in the main service through the stacked service design (see [Stacked Services](services-deep-dive.md#ch_svcref.stacking)). This approach allows you to reference count the deletion operation and contains the effect of restoring deleted data through a small, rarely-changing helper service. See [examples.ncs/service-management/shared-delete](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/shared-delete) for an example.
-
- \
- Alternatively, you might consider pre- and post-modification callbacks for some specific cases.
-* **Prefer `shared*()` functions**: Non-shared create and set operations in the Java and Python low-level API do not add reference counts or backpointer information to changed elements. In case there is overlap with another service, unwanted removal can occur. See [Reference Counting Overlapping Configuration](services-deep-dive.md#ch_svcref.refcount) for details.
-
- \
-  In general, you should prefer `sharedCreate()`, `sharedSet()`, `sharedSetValues()`, and `loadConfigCmds()`. If non-shared variants are used in a shared context, `debug service` displays a warning, such as:\\
-
- ```
- *** WARNING ***: set in service create code is unsafe if data is
- shared by other services
- ```
-
- \
- Likewise, do not use other MAAPI `load_config` variants from the service code. Use the `loadConfigCmds()` or `sharedSetValues()` function to load XML data from a file or a string. See [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-bulkcreate) for an example.
-* **Reordering ordered-by-user lists**: If the service code rearranges an ordered-by-user list with items that were created by another service, that other service becomes out of sync. In some cases, you might be able to avoid out-of-sync scenarios by leveraging special XML template syntax (see [Operations on ordered lists and leaf-lists](../../core-concepts/templates.md#ch_templates.order_ops)) or using service stacking with a helper service.
-
- In general, however, you should reconsider your design and try to avoid such scenarios.
-* **Automatic upgrade of keys for existing services is unsupported**: Service backpointers, described in [Reference Counting Overlapping Configuration](services-deep-dive.md#ch_svcref.refcount), rely on the keys that the service model defines to identify individual service instances. If you update the model by adding, removing, or changing the type of leafs used in the service list key, while there are deployed service instances, the backpointers will not be automatically updated. Therefore, it is best to not change the service list key.
-
- \
- A workaround, if the service key absolutely must change, is to first perform a no-networking undeploy of the affected service instances, then upgrade the model, and finally no-networking re-deploy the previously un-deployed services.
-* **Avoid conflicting intents**: Consider that a service is executed as part of a transaction. If, in the same transaction, the service gets conflicting intents, for example, it gets modified and deleted, the transaction is aborted. You must decide which intent has higher priority and design your services to avoid such situations.
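-
-The following sketch illustrates the opaque-property pattern from the first bullet: caching a nondeterministic value in the FASTMAP opaque so that repeated invocations stay deterministic, and always returning the opaque. The `vlan-id` parameter is purely hypothetical.
-
-```python
-import random
-
-from ncs.application import Service
-
-class ServiceCallbacks(Service):
-    @Service.create
-    def cb_create(self, tctx, root, service, proplist):
-        # proplist is the FASTMAP opaque: a list of (name, value) string pairs
-        props = dict(proplist)
-
-        # Generate the random value only once; dry-run, check-sync, and
-        # re-deploy invocations then reuse the cached value.
-        if 'vlan-id' not in props:
-            props['vlan-id'] = str(random.randint(100, 199))
-        vlan_id = int(props['vlan-id'])
-
-        # ... use vlan_id in the mapping code ...
-
-        # Always return the opaque, even if this callback did not change it.
-        return list(props.items())
-```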
-
-## Service Discovery and Import
-
-A very common situation, when NSO is deployed in an existing network, is that the network already has services implemented. These services may have been deployed manually or through an older provisioning system. To take full advantage of the new system, you should consider importing the existing services into NSO. The goal is to use NSO to manage existing service instances, along with adding new ones in the future.
-
-The process of identifying services and importing them into NSO is called Service Discovery and can be broken down into the following high-level parts:
-
-* Implementing the service to match existing device configuration.
-* Enumerating service instances and their parameters.
-* Amending the service metadata references with reconciliation.
-
-Ultimately, the problem that service discovery addresses is one of referencing or linking configuration to services. Since the network already contains target configuration, a new service instance in NSO produces no changes in the network. This means the new service in NSO by default does not own the network configuration. One side effect is that removing a service will not remove the corresponding device configuration, which is likely to interfere with service modification as well.
-
-
-_Service Reconciliation_
-
-Some of the steps in the process can be automated, while others are mostly manual. The amount of work differs a lot depending on how structured and consistent the original deployment is.
-
-### Matching Configuration
-
-A prerequisite (or possibly the product in an iterative approach) is an NSO service that supports all the different variants of the configuration for the service that are used in the network. This usually means there will be a few additional parameters in the service model that allow selecting the variant of device configuration produced, as well as some covering other non-standard configurations (if such configuration is present).
-
-Alternatively, some parts of the configuration could be managed as out-of-band, in order to simplify and expedite the development of the service model and the mapping logic. But out-of-band data has more limitations when used with service updates. See [Out-of-band Interoperation](../../../operation-and-usage/operations/out-of-band-interoperation.md) for specific disadvantages and carefully consider if out-of-band data is really the right choice.
-
-In the simplest case, there is only one variant and that is the one that the service needs to produce. Let's take the [examples.ncs/service-management/implement-a-service/iface-v2-py](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/iface-v2-py) example and consider what happens when a device already has an existing interface configuration.
-
-```bash
-admin@ncs# show running-config devices device c1 config\
- interface GigabitEthernet 0/1
-devices device c1
- config
- interface GigabitEthernet0/1
- ip address 10.1.2.3 255.255.255.240
- exit
- !
-!
-```
-
-Configuring a new service instance does not produce any new device configuration (notice that device c1 has no changes).
-
-```bash
-admin@ncs(config)# commit dry-run
-cli {
- local-node {
- data +iface instance1 {
- + device c1;
- + interface 0/1;
- + ip-address 10.1.2.3;
- + cidr-netmask 28;
- +}
- }
-}
-```
-
-However, when committed, NSO records the changes, just like in the case of overlapping configuration (see [Reference Counting Overlapping Configuration](services-deep-dive.md#ch_svcref.refcount)). The main difference is that there is only a single backpointer, to the newly configured service, but the `refcount` is 2. The other item that contributes to the `refcount` is the original device configuration, which is why the configuration is not deleted when the service instance is.
-
-```bash
-admin@ncs# show running-config devices device c1 config interface\
- GigabitEthernet 0/1 | display service-meta-data
-devices device c1
- config
- ! Refcount: 2
- ! Backpointer: [ /iface:iface[iface:name='instance1'] ]
- interface GigabitEthernet0/1
- ! Refcount: 2
- ! Originalvalue: 10.1.2.3
- ip address 10.1.2.3 255.255.255.240
- exit
- !
-!
-```
-
-### Enumerating Instances
-
-A prerequisite for service discovery to work is that it is possible to construct a list of the already existing services. Such a list may exist in an inventory system, an external database, or perhaps just an Excel spreadsheet.
-
-You can import the list of services in a number of ways. If you are reading it in from a spreadsheet, a Python script using NSO API directly ([Basic Automation with Python](../../introduction-to-automation/basic-automation-with-python.md)) and a module to read Excel files is likely a good choice.
-
-{% code title="Example: Sample Service Excel import Script" %}
-```python
-import ncs
-from openpyxl import load_workbook
-
-def main():
-    wb = load_workbook('services.xlsx')
- sheet = wb[wb.sheetnames[0]]
-
- with ncs.maapi.single_write_trans('admin', 'python') as t:
- root = ncs.maagic.get_root(t)
- for sr in sheet.rows:
- # Suppose columns in spreadsheet are:
- # instance (A), device (B), interface (C), IP (D), mask (E)
- name = sr[0].value
- service = root.iface.create(name)
- service.device = sr[1].value
- service.interface = sr[2].value
- service.ip_address = sr[3].value
- service.cidr_netmask = sr[4].value
-
- t.apply()
-
-if __name__ == '__main__':
-    main()
-```
-{% endcode %}
-
-Or, you might generate an XML data file to import using the `ncs_load` command; use the `display xml` filter to help you create a template:
-
-```bash
-admin@ncs# show running-config iface | display xml
-<config xmlns="http://tail-f.com/ns/config/1.0">
-  <iface>
-    <name>instance1</name>
-    <device>c1</device>
-    <interface>0/1</interface>
-    <ip-address>10.1.2.3</ip-address>
-    <cidr-netmask>28</cidr-netmask>
-  </iface>
-</config>
-```
-
-Regardless of the way you implement the data import, you can run into two kinds of problems.
-
-On one hand, the service list data may be incomplete. Suppose that the earliest service instances deployed did not take the network mask as a parameter. Moreover, for some specific reasons, a number of interfaces had to deviate from the default of 28 and that information was never populated back in the inventory for old services after the `netmask` parameter was added.
-
-Now the only place where that information is still kept may be the actual device configuration. Fortunately, you can access it through NSO, which may allow you to extract the missing data automatically, for example:
-
-```python
-from ipaddress import IPv4Network
-
-devconfig = root.devices.device[service.device].config
-intf = devconfig.interface.GigabitEthernet[service.interface]
-netmask = intf.ip.address.primary.mask
-cidr = IPv4Network(f'0.0.0.0/{netmask}').prefixlen
-```
-
-On the other hand, some parameters may be NSO specific, such as those controlling which variant of configuration to produce. Again, you might be able to use a script to find this information, or it could turn out that the configuration is too complex to make such a script feasible.
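-
-For example, continuing the `iface` scenario, a script might derive such a parameter from the device configuration itself, similar to the netmask example above. The sketch below anticipates the `variant` input used later in the Iterative Approach section and checks for the `ip dhcp snooping trust` setting; the helper name is hypothetical.
-
-```python
-def detect_variant(root, service):
-    # The newer (v3) provisioning variant also sets 'ip dhcp snooping trust'
-    intf = (root.devices.device[service.device]
-            .config.interface.GigabitEthernet[service.interface])
-    return 'v3' if intf.ip.dhcp.snooping.trust.exists() else 'v2'
-```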
-
-In general, this can be the most tricky part of the service discovery process, making it very hard to automate. It all comes down to how good the existing data is. Keep in mind that this exercise is typically also a cleanup exercise, and every network will be different.
-
-### Reconciliation
-
-The last step is updating the metadata, telling NSO that a given service controls (owns) the device configuration that was already present when the NSO service was configured. This is called reconciliation and you achieve it using a special `re-deploy reconcile { attach-non-service-config }` action for the service.
-
-Let's examine the effects of this action on the following data:
-
-```bash
-admin@ncs# show running-config devices device c1 config\
- interface GigabitEthernet 0/1 | display service-meta-data
-devices device c1
- config
- ! Refcount: 2
- ! Backpointer: [ /iface:iface[iface:name='instance1'] ]
- interface GigabitEthernet0/1
- ! Refcount: 2
- ! Originalvalue: 10.1.2.3
- ip address 10.1.2.3 255.255.255.240
- exit
- !
-!
-```
-
-After you run the action, NSO updates the `refcount`, removing the reference to the original device configuration:
-
-```bash
-admin@ncs# iface instance1 re-deploy reconcile
-admin@ncs# show running-config devices device c1 config\
- interface GigabitEthernet 0/1 | display service-meta-data
-devices device c1
- config
- ! Refcount: 1
- ! Backpointer: [ /iface:iface[iface:name='instance1'] ]
- interface GigabitEthernet0/1
- ! Refcount: 1
- ip address 10.1.2.3 255.255.255.240
- exit
- !
-!
-```
-
-What is more, the reconcile algorithm works even if multiple service instances share configuration. What if you had two instances of the `iface` service instead of one?
-
-Before reconciliation, the device configuration would show a refcount of three.
-
-```bash
-admin@ncs# show running-config devices device c1 config\
- interface GigabitEthernet 0/1 | display service-meta-data
-devices device c1
- config
- ! Refcount: 3
- ! Backpointer: [ /iface:iface[iface:name='instance1'] /iface:iface[iface:name='instance2'] ]
- interface GigabitEthernet0/1
- ! Refcount: 3
- ! Originalvalue: 10.1.2.3
- ip address 10.1.2.3 255.255.255.240
- exit
- !
-!
-```
-
-Invoking `re-deploy reconcile` on either one or both of the instances makes the services the sole owners of the configuration.
-
-```bash
-admin@ncs# show running-config devices device c1 config\
- interface GigabitEthernet 0/1 | display service-meta-data
-devices device c1
- config
- ! Refcount: 2
- ! Backpointer: [ /iface:iface[iface:name='instance1'] /iface:iface[iface:name='instance2'] ]
- interface GigabitEthernet0/1
- ! Refcount: 2
- ip address 10.1.2.3 255.255.255.240
- exit
- !
-!
-```
-
-This means the device configuration is removed only when you remove both service instances.
-
-```bash
-admin@ncs(config)# no iface instance1
-admin@ncs(config)# commit dry-run outformat native
-native {
-}
-admin@ncs(config)# no iface instance2
-admin@ncs(config)# commit dry-run outformat native
-native {
- device {
- name c1
- data no interface GigabitEthernet0/1
- }
-}
-```
-
-The reconcile operation only removes the references to the original configuration (without the service backpointer), so you can execute it as many times as you wish. Just note that it is part of a service re-deploy, with all the implications that brings, such as potentially deploying new configuration to devices when you change the service template.
-
-As an alternative to the `re-deploy reconcile`, you can initially add the service configuration with a `commit reconcile` variant, performing reconciliation right away.
-
-### Iterative Approach
-
-It is hard to design a service in one go when you wish to cover existing configurations that are exceedingly complex or have a lot of variance. In such cases, many prefer an iterative approach, where you tackle the problem piece-by-piece.
-
-Suppose there are two variants of the service configured in the network: `iface-v2-py` and the newer `iface-v3`, which produces a slightly different configuration. This is a typical scenario when a different (non-NSO) automation system is used and the service gradually evolves over time, or when a Method of Procedure (MOP) is updated in the case of manual provisioning.
-
-We will tackle this scenario to show how you might perform service discovery in an iterative fashion. We shall start with the `iface-v2-py` as the first iteration of the `iface` service, which represents what configuration the service should produce to the best of our current knowledge.
-
-There are configurations for two service instances in the network already: for interfaces `0/1` and `0/2` on the `c1` device. So, configure the two corresponding `iface` instances.
-
-```bash
-admin@ncs(config)# commit dry-run
-cli {
- local-node {
- data +iface instance1 {
- + device c1;
- + interface 0/1;
- + ip-address 10.1.2.3;
- + cidr-netmask 28;
- +}
- +iface instance2 {
- + device c1;
- + interface 0/2;
- + ip-address 10.2.2.3;
- + cidr-netmask 28;
- +}
- }
-}
-admin@ncs(config)# commit
-```
-
-You can also use the `commit no-deploy` variant to add service parameters when a normal commit would produce device changes, which you do not want.
-
-Then use the `re-deploy reconcile { discard-non-service-config } dry-run` command to observe the difference between the service-produced configuration and the one present in the network.
-
-```bash
-admin@ncs# iface instance1 re-deploy reconcile\
- { discard-non-service-config } dry-run
-cli {
-}
-```
-
-For `instance1`, the config is the same, so you can safely reconcile it already.
-
-```bash
-admin@ncs# iface instance1 re-deploy reconcile
-```
-
-But interface 0/2 (`instance2`), which you suspect was initially provisioned with the newer version of the service, produces the following:
-
-```bash
-admin@ncs# iface instance2 re-deploy reconcile\
- { discard-non-service-config } dry-run
-cli {
- local-node {
- data devices {
- device c1 {
- config {
- interface {
- GigabitEthernet 0/2 {
- ip {
- dhcp {
- snooping {
- - trust;
- }
- }
- }
- }
- }
- }
- }
- }
-
- }
-}
-```
-
-The output tells you that the service is missing the `ip dhcp snooping trust` part of the interface configuration. Since the service does not generate this part of the configuration yet, running `re-deploy reconcile { discard-non-service-config }` (without dry-run) would remove the DHCP trust setting. This is not what we want.
-
-One option, and this is the default reconcile mode, would be to use `keep-non-service-config` instead of `discard-non-service-config`. But that would result in the service taking ownership of only part of the interface configuration (the IP address).
-
-Instead, the right approach is to add the missing part to the service template. There is, however, a little problem. Adding the DHCP snooping trust configuration unconditionally to the template can interfere with the other service instance, `instance1`.
-
-In some cases, upgrading the old configuration to the new variant is viable, but in most situations, you likely want to avoid all device configuration changes. For the latter case, you need to add another parameter to the service model that selects the configuration variant. You must update the template too, producing the second iteration of the service.
-
-```bash
-iface instance2
- device c1
- interface 0/2
- ip-address 10.2.2.3
- cidr-netmask 28
- variant v3
-!
-```
-
-With the updated configuration, you can now safely reconcile the `instance2` service instance:
-
-```bash
-admin@ncs# iface instance2 re-deploy reconcile\
- { discard-non-service-config } dry-run
-cli {
-}
-admin@ncs# iface instance2 re-deploy reconcile
-```
-
-Nevertheless, keep in mind that the `discard-non-service-config` reconcile operation only considers parts of the device configuration under nodes that are created with the service mapping. Even if all data there is covered in the mapping, there could still be other parts that belong to the service but reside in an entirely different section of the device configuration (say, DNS configuration under `ip name-server`, which is outside the `interface GigabitEthernet` part) or even on a different device. The `discard-non-service-config` option cannot find that kind of configuration on its own; you must add it manually.
-
-You can find the complete `iface` service as part of the [examples.ncs/service-management/discovery](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/discovery) example.
-
-Since there were only two service instances to reconcile, the process is now complete. In practice, you are likely to encounter multiple variants and many more service instances, requiring you to make additional iterations. But you can follow the iterative process shown here.
-
-## Partial Sync
-
-In some cases, a service may need to rely on the actual device configurations to compute the changeset. It is often a requirement to pull the current device configurations from the network before executing such a service. Doing a full `sync-from` on a number of devices is an expensive task, especially if it needs to be performed often. The alternative in this case is to use `partial-sync-from`.
-
-In cases where a multitude of service instances touch a device that is not entirely orchestrated using NSO, i.e., relying on the `partial-sync-from` feature described above, and the device needs to be replaced, all services need to be re-deployed. This can be expensive, depending on the number of service instances. `Partial-sync-to` enables the replacement of devices in a more efficient fashion.
-
-The `partial-sync-from` and `partial-sync-to` actions allow you to specify certain portions of the device's configuration to be pulled or pushed from or to the network, respectively, rather than the full configuration. These are more efficient operations on NETCONF devices and NEDs that support the partial-show feature; NEDs that do not support the partial-show feature fall back to pulling or pushing the whole configuration.
-
-Even though `partial-sync-from` and `partial-sync-to` allow pulling or pushing only a part of the device's configuration, the actions are not allowed to break the consistency of configuration in CDB or on the device, as defined by the YANG model. Hence, extra consideration needs to be given to dependencies inside the device model. If some configuration item A depends on configuration item B in the device's configuration, pulling only A may fail due to the unsatisfied dependency on B. In this case, both A and B need to be pulled, even if the service is only interested in the value of A.
-
-It is important to note that `partial-sync-from` and `partial-sync-to` clear the transaction ID of the device in NSO unless the whole configuration has been selected (e.g., `/ncs:devices/ncs:device[ncs:name='ex0']/ncs:config`). This ensures NSO does not miss any changes to other parts of the device configuration, but it does cause NSO to consider the device out of sync.
-
-### Partial `sync-from`
-
-Pulling the configuration from the network needs to be initiated outside the service code. At the same time, the list of configuration subtrees required by a certain service should be maintained by the service developer. Hence it is a good practice for such a service to implement a wrapper action that invokes the generic `/devices/partial-sync-from` action with the correct list of paths. The user or application that manages the service would only need to invoke the wrapper action without needing to know which parts of the configuration the service is interested in.
-
-The snippet in the example below shows running the `partial-sync-from` action via Java, using the `router` device from the [examples.ncs/device-management/router-network](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/router-network) example.
-
-{% code title="Example of Running partial-sync-from Action via Java API" %}
-```java
- ConfXMLParam[] params = new ConfXMLParam[] {
- new ConfXMLParamValue("ncs", "path", new ConfList(new ConfValue[] {
- new ConfBuf("/ncs:devices/ncs:device[ncs:name='ex0']/"
- + "ncs:config/r:sys/r:interfaces/r:interface[r:name='eth0']"),
- new ConfBuf("/ncs:devices/ncs:device[ncs:name='ex1']/"
- + "ncs:config/r:sys/r:dns/r:server")
- })),
- new ConfXMLParamLeaf("ncs", "suppress-positive-result")};
- ConfXMLParam[] result =
- maapi.requestAction(params, "/ncs:devices/ncs:partial-sync-from");
-```
-{% endcode %}
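-
-A comparable invocation from Python, for instance inside such a wrapper action, might look as follows. This is a sketch only: it reuses the paths from the Java snippet and assumes the `admin` user context and the `sync-result` output list of the action.
-
-```python
-import ncs
-
-def partial_sync_from_paths():
-    """Pull only the subtrees the (hypothetical) service depends on."""
-    with ncs.maapi.single_read_trans('admin', 'python') as t:
-        root = ncs.maagic.get_root(t)
-        action = root.ncs__devices.partial_sync_from
-        params = action.get_input()
-        params.path = [
-            "/ncs:devices/ncs:device[ncs:name='ex0']"
-            "/ncs:config/r:sys/r:interfaces/r:interface[r:name='eth0']",
-            "/ncs:devices/ncs:device[ncs:name='ex1']"
-            "/ncs:config/r:sys/r:dns/r:server",
-        ]
-        result = action(params)
-        for r in result.sync_result:
-            print(r.device, r.result)
-```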
diff --git a/development/advanced-development/development-environment-and-resources.md b/development/advanced-development/development-environment-and-resources.md
deleted file mode 100644
index d40eff02..00000000
--- a/development/advanced-development/development-environment-and-resources.md
+++ /dev/null
@@ -1,134 +0,0 @@
----
-description: Useful information to help you get started with NSO development.
----
-
-# Development Environment and Resources
-
-This section describes some recipes, tools, and other resources that you may find useful throughout development. The topics are tailored to novice users and focus on making development with NSO a more enjoyable experience.
-
-## Development NSO Instance
-
-Many developers prefer their own, dedicated NSO instance to avoid their work clashing with other team members. You can use either a local or remote Linux machine (such as a VM) or a macOS computer for this purpose.
-
-The advantage of running local Linux with a GUI or macOS is that it is easier to set up the Integrated Development Environment (IDE) and other tools when they run on the same system as NSO. However, many IDEs today also allow working remotely, such as through the SSH protocol, making the choice of local versus remote less of a concern.
-
-For development, using the so-called Local Install of NSO has some distinct advantages:
-
-* It does not require elevated privileges to install or run.
-* It keeps all NSO files in the same place (user-defined).
-* It allows you to quickly switch between projects and NSO versions.
-
-If you work with multiple projects in parallel, local install also allows you to take advantage of Python virtual environments to separate Python packages per project; simply start the NSO instance in an environment you have activated.
-
-The main downside of using a local install is that it differs slightly from a system (production) install, such as in the filesystem paths used and the out-of-the-box configuration.
-
-See [Local Install](../../administration/installation-and-deployment/local-install.md) for installation instructions.
-
-## Examples and Showcases
-
-There are a number of examples and showcases in this guide. We encourage you to follow them through. They are also a great reference if you are experimenting with a new feature and have trouble getting it to work; you can inspect and compare with the implementation in the example.
-
-To run the examples, you will need access to an NSO instance. A development instance described in this chapter is the perfect option for running locally. See [Running NSO Examples](../../administration/installation-and-deployment/post-install-actions/running-nso-examples.md).
-
-{% hint style="success" %}
-Cisco also provides an online sandbox and containerized environments, such as a [Learning Lab](https://developer.cisco.com/learning/labs/nso-examples) or [NSO Sandbox](https://developer.cisco.com/catalogs/sandbox/nso), designed for this purpose. Refer to the [NSO Docs Home](https://developer.cisco.com/docs/nso/) site for additional resources.
-{% endhint %}
-
-## IDE
-
-Modern IDEs offer many features on top of advanced file editing support, such as code highlighting, syntax checks, and integrated debugging. While the initial setup takes some effort, the benefits of using an IDE are immense.
-
-[Visual Studio Code](https://code.visualstudio.com/) (VS Code) is a freely available and extensible IDE. You can add support for Java, Python, and YANG languages, as well as remote access through SSH via VS Code extensions. Consider installing the following extensions:
-
-* **Python** by Microsoft: Adds Python support.
-* **Language Support for Java™** by Red Hat: Adds Java support.
-* **NSO Developer Studio** by Cisco: Adds NSO-specific features as described in [NSO Developer Studio](https://nso-docs.cisco.com/resources/platform-tools/nso-developer-studio).
-* **Remote - SSH** by Microsoft: Adds support for remote development.
-
-The Remote - SSH extension is especially useful when you must work with a system through an SSH session. Once you connect to the remote host by clicking the `><` button (typically found in the bottom-left corner of the VS Code window), you can open and edit remote files with ease. If you also want language support (syntax highlighting and the like), you may need to install the VS Code extensions remotely, that is, after you have connected to the remote host; otherwise, the extension installation screen might not show the option for installation on the connected host.
-
-
-_Using the Remote - SSH extension in VS Code_
-
-You will also benefit greatly from setting up SSH certificate authentication if you are using an SSH session for your work.
-
-## Automating Instance Setup
-
-Once you get familiar with NSO development and gain some experience, a single NSO instance is likely to be insufficient, either because you need instances for unit testing, because you need one-off (throwaway) instances for an experiment, or for something else entirely.
-
-NSO includes tooling to help you quickly set up new local instances when such a need arises.
-
-The following recipe relies on the `ncs-setup` command, which is available in the local install variant and requires a correctly set up shell environment (e.g., running `source ncsrc`). See [Local Install](../../administration/installation-and-deployment/local-install.md) for details.
-
-A new instance typically needs a few things to be useful:
-
-* Packages
-* Initial data
-* Devices to manage
-
-In its simplest form, the `ncs-setup` invocation requires only a destination directory. However, you can specify additional packages to use with the `--package` option. Use the option to add as many packages as you need.
-
-Running `ncs-setup` creates the required filesystem structure for an NSO instance. If you wish to include initial configuration data, put the XML-encoded data in the `ncs-cdb` subdirectory, and NSO will load it at the first start, as described in [Initialization Files](../introduction-to-automation/cdb-and-yang.md#d5e268).
-
-NSO also needs to know about the managed devices. In case you are using `ncs-netsim` simulated devices (described in [Network Simulator](../../operation-and-usage/operations/network-simulator-netsim.md)), you can use the `--netsim-dir` option with `ncs-setup` to add them directly. Otherwise, you may need to create some initial XML files with the relevant device configuration data—much like how you would add a device to NSO manually.
-
-Most of the time, you must also invoke a sync with the device so that NSO and the device agree on the device configuration. If you wish to push some initial configuration to the device, you may add the configuration in the form of initial XML data and perform a `sync-to`. Alternatively, you can simply do a `sync-from`. You can use the `ncs_cmd` command for this purpose.
-
-Combining all of this together, consider the following example:
-
-1. Start by creating a new directory to hold the files:
-
- ```bash
- $ mkdir nso-throwaway
- $ cd nso-throwaway
- ```
-2. Create and start a few simulated devices with `ncs-netsim`, using `./netsim` as directory:
-
- ```bash
-   $ ncs-netsim create-network $NCS_DIR/packages/neds/cisco-ios-cli-3.8 3 c
- DEVICE c0 CREATED
- DEVICE c1 CREATED
- DEVICE c2 CREATED
- $ ncs-netsim start
- ```
-3. Next, create the running directory with the NED package for the simulated devices and one more package. Also, add configuration data to NSO on how to connect to these simulated devices.
-
- ```bash
- $ ncs-setup --dest ncs-run --netsim-dir ./netsim \
- --package $NCS_DIR/packages/neds/cisco-ios-cli-3.8 \
- --package $NCS_DIR/packages/neds/cisco-iosxr-cli-3.0
- ```
-4. Now you can add custom initial data as XML files to `ncs-run/ncs-cdb/`. Usually, you would use existing files, but you can also create them on the fly.
-
- ```bash
- $ cat >ncs-run/ncs-cdb/my_init.xml <<'EOF'
-   <config xmlns="http://tail-f.com/ns/config/1.0">
-     <session xmlns="http://tail-f.com/ns/aaa/1.1">
-       <idle-timeout>0</idle-timeout>
-     </session>
-   </config>
- EOF
- ```
-5. At this point, you are ready to start NSO:
-
- ```bash
- $ cd ncs-run
- $ ncs
- ```
-6. Finally, request an initial `sync-from`:
-
- ```bash
- $ ncs_cmd -u admin -c 'maction /devices/sync-from'
- sync-result begin
- device c0
- result true
- sync-result end
- sync-result begin
- device c1
- result true
- sync-result end
- sync-result begin
- device c2
- result true
- sync-result end
- ```
-7. The instance is now ready for work. Once you are finished, you can stop it with `ncs --stop`. Remember to also stop the simulated devices with `ncs-netsim stop` if you no longer need them. Then, delete the containing folder (`nso-throwaway`) to remove all the leftover files and data.
diff --git a/development/advanced-development/kicker.md b/development/advanced-development/kicker.md
deleted file mode 100644
index 2c324fd7..00000000
--- a/development/advanced-development/kicker.md
+++ /dev/null
@@ -1,661 +0,0 @@
----
-description: Trigger actions on events using Kicker.
----
-
-# Kicker
-
-Kickers constitute a declarative notification mechanism for triggering actions on certain stimuli like a database change or a received notification. These different stimuli and their kickers are defined separately as data kicker and notification kicker respectively.
-
-Common to all types of kickers is that they are declarative. Kickers are modeled in YANG and Kicker instances are stored as configuration data in CDB.
-
-Immediately after a transaction that defines a new kicker is committed, the kicker becomes active. The same holds for removal. This also implies that the programming effort for a kicker amounts to implementing the action to be invoked.
-
-The data-kicker replicates much of the functionality otherwise attained by a CDB subscriber, but without the extra registration code and runtime daemon that a CDB subscriber requires. Moreover, the data-kicker works for all data providers.
-
-The notification-kicker reacts to notifications received by NSO using a defined notification subscription under `/ncs:devices/device/notifications/subscription`. This simplifies the handling of southbound-emitted notifications. Traditionally, these notifications were stored in CDB as operational data, and a separate CDB subscriber was used to act on them. With the notification-kicker, the CDB subscriber can be removed, and there is no longer any need to store the received notifications in CDB.
-
-## Kicker Action Invocation
-
-An action as defined by YANG contains an input parameter definition and an output parameter definition. However, a kicker that invokes an action treats the input parameters in a specific way.
-
-The kicker mechanism first checks if the input parameters match those in the `kicker:action-input-params` YANG grouping defined in the `tailf-kicker.yang` file. If so, the action will be invoked with the input parameters:
-
-* `kicker-id`: The id (name) of the invoking kicker.
-* `path`: The path of the current monitor triggering the kicker.
-* `tid`: The transaction ID of a synthetic transaction containing the changes that led to the triggering of the kicker.
-
-The "synthetic" transaction implies that this is a copy of the original transaction that led to the kicker triggering. It only contains the data tree under the monitor. The original transaction is already committed, so this data might no longer reflect the "running" datastore. It is useful in that the action implementation can attach to and diff-iterate over this transaction to retrieve the changes that led to the kicker invocation.
-
-If the kicker mechanism finds an action that does not match the above input parameters, it will invoke the action with an empty parameter list. This implies that a kicker action must either match the above `kicker:action-input-params` grouping precisely or accept an empty incoming parameter list. Otherwise, the action invocation will fail.
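-
-For illustration, a Python action whose input matches `kicker:action-input-params` (like the `iter_me` action in the model below) might attach to the synthetic transaction and iterate over the changes roughly as follows; registration boilerplate is omitted and the attach/detach handling is a sketch.
-
-```python
-import ncs
-
-class IterMeAction(ncs.dp.Action):
-    @ncs.dp.Action.action
-    def cb_action(self, uinfo, name, kp, input, output):
-        # Input matches the kicker:action-input-params grouping
-        self.log.info('triggered by kicker ', input.kicker_id,
-                      ' at monitor ', input.path)
-
-        def diff_iter(keypath, op, oldv, newv):
-            self.log.info('  change at ', str(keypath))
-            return ncs.ITER_RECURSE
-
-        # Attach to the synthetic transaction and inspect its changes
-        with ncs.maapi.Maapi() as m:
-            trans = m.attach(input.tid)
-            trans.diff_iterate(diff_iter, 0)
-            m.detach(input.tid)
-```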
-
-## Data Kicker Concepts
-
-For a data kicker, the following principles hold:
-
-* Kickers are triggered by changes in the sub-tree indicated by the `monitor` parameter.
-* Actions are invoked during the commit phase. Hence aborted transactions never trigger kickers.
-* Kickers process both configuration and operational data changes but can be configured to react to a certain type of change only.
-* No distinction is made between CRUD types, i.e., create, delete, update. All changes potentially trigger kickers.
-* Kickers may have constraints that suppress invocations. Changes in the sub-tree indicated by `monitor` is a necessary but perhaps not a sufficient condition for the action to be invoked.
-
-### Generalized Monitors
-
-For a data kicker, it is the `monitor` that specifies the subtree in which a change should invoke the kicker. The `monitor` leaf is of type `node-instance-identifier`, which means that predicates for keys are optional; keys may be omitted and then represent all instances for that key.
-
-The resulting evaluation of the monitor defines a node set. Each node in this node set becomes the root context for any further XPath evaluations necessary before invoking the kicker action.
-
-The following example shows the strength of using XPath to define kickers. Say that we have a situation described by the following YANG model snippet:
-
-```yang
-module example {
- namespace "http://tail-f.com/ns/test/example";
- prefix example;
-
- ...
-
- container sys {
- list ifc {
- key name;
- max-elements 64;
- leaf name {
- type interfaceName;
- }
- leaf description {
- type string;
- }
- leaf enabled {
- type boolean;
- default true;
- }
- container hw {
- leaf speed {
- type interfaceSpeed;
- }
- leaf duplex {
- type interfaceDuplex;
- }
- leaf mtu {
- type mtuSize;
- }
- leaf mac {
- type string;
- }
- }
- list ip {
- key address;
- max-elements 1024;
- leaf address {
- type inet:ipv4-address;
- }
- leaf prefix-length {
- type prefixLengthIPv4;
- mandatory true;
- }
- leaf broadcast {
- type inet:ipv4-address;
- }
- }
-
- tailf:action local_me {
- tailf:actionpoint kick-me-point;
- input {
- }
- output {
- }
- }
- }
-
- tailf:action kick_me {
- tailf:actionpoint kick-me-point;
- input {
- }
- output {
- }
- }
-
- tailf:action iter_me {
- tailf:actionpoint kick-me-point;
- input {
- uses kicker:action-input-params;
- }
- output {
- }
- }
-
- }
-}
-```
-
-Then, we can define a kicker that monitors a specific entry in the list and calls the correlated `local_me` action:
-
-```cli
-admin@ncs(config)# kickers data-kicker e1 \
-> monitor /sys/ifc[name='port-0'] \
-> kick-node /sys/ifc[name='port-0'] \
-> action-name local_me
-
-admin@ncs(config-data-kicker-e1)# commit
-Commit complete
-admin@ncs(config-data-kicker-e1)# top
-admin@ncs(config)# show full-configuration kickers
-kickers data-kicker e1
- monitor /sys/ifc[name='port-0']
- kick-node /sys/ifc[name='port-0']
- action-name local_me
-!
-```
-
-On the other hand, we can define a kicker that monitors all entries of the list and calls the correlated `local_me` action for each entry:
-
-```cli
-admin@ncs(config)# kickers data-kicker e2 \
-> monitor /sys/ifc \
-> kick-node . \
-> action-name local_me
-
-admin@ncs(config-data-kicker-e2)# commit
-Commit complete
-admin@ncs(config-data-kicker-e2)# top
-admin@ncs(config)# show full-configuration kickers
-kickers data-kicker e2
- monitor /sys/ifc
- kick-node .
- action-name local_me
-!
-```
-
-Here the `.` in the `kick-node` refers to the current node in the node set defined by the `monitor`.
-
-### Kicker Constraints/Filters
-
-A data kicker may be constrained by adding conditions that suppress invocations. The leaf `trigger-expr` contains a boolean XPath expression that is evaluated twice: before and after the change-set of the commit has been applied to the database(s).
-
-The XPath expression has to be evaluated twice to detect the change caused by the transaction.
-
-The two boolean results, together with the leaf `trigger-type`, control whether the kicker is triggered:
-
-* `enter-and-leave`: false -> true (i.e., positive flank) or true -> false (negative flank).
-* `enter`: false -> true.
-
-The following example defines a data kicker with a `trigger-expr` on the interface MTU:
-
-```cli
-admin@ncs(config)# kickers data-kicker k1 monitor /sys/ifc \
-> trigger-expr "hw/mtu > 800" \
-> trigger-type enter \
-> kick-node /sys \
-> action-name kick_me
-admin@ncs(config-data-kicker-k1)# commit
-Commit complete
-admin@ncs(config-data-kicker-k1)# top
-admin@ncs(config)# show full-configuration kickers
-kickers data-kicker k1
- monitor /sys/ifc
- trigger-expr "hw/mtu > 800"
- trigger-type enter
- kick-node /sys
- action-name kick_me
-!
-```
-
-Start by changing the MTU to 800:
-
-```cli
-admin@ncs(config)# sys ifc port-0 hw mtu 800
-admin@ncs(config-ifc-port-0)# commit | debug kicker
- 2017-02-15T16:35:36.039 kicker: k1 at /kicker_example:sys/kicker_example:ifc[kicker_example:name='port-0'] changed;
-not invoking 'kick_me' trigger-expr false -> false
-Commit complete.
-```
-
-Since 800 is not greater than 800, the `trigger-expr` still evaluates to false and the kicker is not triggered. Let's try again with a value above the threshold:
-
-```cli
-admin@ncs(config)# sys ifc port-0 hw mtu 801
-admin@ncs(config-ifc-port-0)# commit | debug kicker
- 2017-02-15T16:35:36.039 kicker: k1 at /kicker_example:sys/kicker_example:ifc[kicker_example:name='port-0'] changed;
-invoking 'kick_me' trigger-expr false -> true
-Commit complete.
-```
-
-The `trigger-expr` can in some cases be used to refine the `monitor` of a kicker, to avoid unnecessary evaluations. Let's change something below the `monitor` that doesn't touch the nodes in the `trigger-expr`:
-
-```cli
-admin@ncs(config)# sys ifc port-0 hw speed ten
-admin@ncs(config-ifc-port-0)# commit | debug kicker
-Commit complete.
-```
-
-Notice that no evaluation was done.
-
-### Variable Bindings
-
-A data kicker may be provided with a list of variables (named values). Each variable binding consists of a name and an XPath expression. The XPath expressions are evaluated on demand, i.e., when referenced from the `monitor` or `trigger-expr` expressions.
-
-```cli
-admin@ncs% set kickers data-kicker k3 monitor $PATH/c \
-    kick-node /x/y[id='n1'] \
-    action-name kick-me \
-    variable PATH value "/a/b[k1=3][k2='3']"
-admin@ncs%
-```
-
-In the example above, `PATH` is defined and referred to by the `monitor` expression by using the expression `$PATH`.
-
-{% hint style="info" %}
-A monitor expression is not evaluated by the XPath engine. Hence, no trace of the evaluation can be found in the XPath log.
-
-Monitor expressions are expanded and installed in an internal data structure at kicker creation/compile time. XPath may be used while defining kickers by referring to a named XPath expression.
-{% endhint %}
-
-### A Simple Data Kicker Example
-
-This example is part of the [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/website-service) example. It consists of an action and a `README_KICKER` file. For all kickers defined in this example, the same action is used. This action is defined in the `website-service` package.
-
-The following is the YANG snippet for the action definition from the `website.yang` file:
-
-```yang
-module web-site {
- namespace "http://examples.com/web-site";
- prefix wse;
-
- ...
-
- augment /ncs:services {
-
- ...
-
- container actions {
- tailf:action diffcheck {
- tailf:actionpoint diffcheck;
- input {
- uses kicker:action-input-params;
- }
- output {
- }
- }
- }
- }
-
-}
-```
-
-The implementation of the action can be found in the `WebSiteServiceRFS.java` class file. Since it takes the `kicker:action-input-params` as input, the `Tid` of the synthetic transaction is available. The action attaches to this transaction and diff-iterates over it. The result of the diff-iteration is printed in the `ncs-java-vm.log`:
-
-```java
-class WebSiteServiceRFS {
-
- ....
-
- private final NcsMain main;
-
- public WebSiteServiceRFS(NcsMain main) {
- this.main = main;
- }
-
- @ActionCallback(callPoint="diffcheck", callType=ActionCBType.ACTION)
- public ConfXMLParam[] diffcheck(DpActionTrans trans, ConfTag name,
- ConfObject[] kp, ConfXMLParam[] params)
- throws DpCallbackException {
- try (Maapi maapi3 = new Maapi(main.getAddress())) {
- System.out.println("-------------------");
- System.out.println(params[0]);
- System.out.println(params[1]);
- System.out.println(params[2]);
-
- ConfUInt32 val = (ConfUInt32) params[2].getValue();
- int tid = (int)val.longValue();
-
- maapi3.attach(tid, -1);
-
- maapi3.diffIterate(tid, new MaapiDiffIterate() {
-            // Override the default iterate method and print each change
- public DiffIterateResultFlag iterate(ConfObject[] kp,
- DiffIterateOperFlag op,
- ConfObject oldValue,
- ConfObject newValue,
- Object initstate) {
- System.out.println("path = " + new ConfPath(kp));
- System.out.println("op = " + op);
- System.out.println("newValue = " + newValue);
- return DiffIterateResultFlag.ITER_RECURSE;
-
- }
-
- });
-
-
- maapi3.detach(tid);
-
- return new ConfXMLParam[]{};
- } catch (Exception e) {
- throw new DpCallbackException("diffcheck failed", e);
- }
- }
-}
-```
-
-We are now ready to start the [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/website-service) example and define our data kicker. Do the following:
-
-```bash
-$ make all
-$ ncs-netsim start
-$ ncs
-$ ncs_cli -C -u admin
-
-admin@ncs# devices sync-from
-sync-result {
- device lb0
- result true
-}
-sync-result {
- device www0
- result true
-}
-sync-result {
- device www1
- result true
-}
-sync-result {
- device www2
- result true
-}
-```
-
-The kickers are defined under the hide-group `debug`. To be able to show and declare kickers, we first need to unhide this hide group:
-
-```cli
-admin@ncs# config
-admin@ncs(config)# unhide debug
-```
-
-We now define a data-kicker for the `profile` list under the container `/services/properties/wsp:web-site`, which is augmented by the service:
-
-```cli
-admin@ncs(config)# kickers data-kicker a1 \
-> monitor /services/properties/wsp:web-site/profile \
-> kick-node /services/wse:actions action-name diffcheck
-
-admin@ncs(config-data-kicker-a1)# commit
-admin@ncs(config-data-kicker-a1)# top
-admin@ncs(config)# show full-configuration kickers data-kicker a1
-kickers data-kicker a1
- monitor /services/properties/wsp:web-site/profile
- kick-node /services/wse:actions
- action-name diffcheck
-!
-```
-
-We now commit a change in the `profile` list and use the `debug kicker` pipe option to follow the kicker invocation:
-
-```cli
-admin@ncs(config)# services properties web-site profile lean lb lb0
-admin@ncs(config-profile-lean)# commit | debug kicker
- 2017-02-15T16:35:36.039 kicker: a1 at /ncs:services/ncs:properties/wsp:web-site/wsp:profile[wsp:name='lean'] changed; invoking diffcheck
-Commit complete.
-
-admin@ncs(config-profile-lean)# top
-admin@ncs(config)# exit
-```
-
-We can also check the result of the action by looking into the `ncs-java-vm.log`:
-
-```cli
-admin@ncs# file show logs/ncs-java-vm.log
-```
-
-In the end, we will find the following printout from the `diffcheck` action:
-
-```
--------------------
-{[669406386|id], a1}
-{[669406386|monitor], /ncs:services/properties/web-site/profile{lean}}
-{[669406386|tid], 168}
-path = /ncs:services/properties/wsp:web-site/profile{lean}
-op = MOP_CREATED
-newValue = null
-path = /ncs:services/properties/wsp:web-site/profile{lean}/name
-op = MOP_VALUE_SET
-newValue = lean
-path = /ncs:services/properties/wsp:web-site/profile{lean}/lb
-op = MOP_VALUE_SET
-newValue = lb0
-[ok][2017-02-15 17:11:59]
-```
-
-## Notification Kicker Concepts
-
-For a notification kicker, the following principles hold:
-
-* Notification kickers are triggered by the arrival of notifications from any device subscription. These subscriptions are defined under the `/devices/device/notifications/subscription` path.
-* Storing the received notifications in CDB is optional and not part of the notification kicker functionality.
-* The ordering of kicker invocations is generally not guaranteed. That is, a kicker triggered at a later time might execute before a kicker that was triggered earlier, and kickers triggered for the same subscription may execute in any order. A `priority` and a `serializer` value can be used to modify this behavior.
-
-### Notification Selector Expression
-
-The notification kicker is defined using a mandatory `selector-expr`, which is an XPath 1.0 expression. When a notification is received, a synthetic transaction is started, and the notification is written as if it were stored under the path `/devices/device/notifications/received-notifications/notification/data`. Actually storing the notification in CDB is optional. The `selector-expr` is evaluated with the notification node as the current context and `/` as the root context. For example, if the device model defines a notification like this:
-
-```yang
-module device {
- ...
- notification mynotif {
- leaf message {
- type string;
- }
- }
- ...
-}
-```
-
-The notification node `mynotif` will be the current context for the `selector-expr`. There are four predefined variable bindings available when evaluating this expression:
-
-* `DEVICE`: The name of the device emitting the current notification.
-* `SUBSCRIPTION_NAME`: The name of the subscription from which the current notification was received.
-* `NOTIFICATION_NAME`: The name of the current notification.
-* `NOTIFICATION_NS`: The namespace of the current notification.
-
-The `selector-expr` technique for defining the notification kickers is very flexible. For instance, a kicker can be defined to:
-
-* Receive all notifications for a device.
-* Receive all notifications of a certain type for any device.
-* Receive a subset of notifications from a subset of devices, by using subscriptions with the same name on several devices.
-
-In addition to this usage of the predefined variable bindings, it is possible to further drill down into the specific notification to trigger on certain leafs in the notification.
-
-### Variable Bindings
-
-In addition to the four variable bindings mentioned above, a notification kicker may also be provided with a list of variables (named values). Each variable binding consists of a name and an XPath expression. The XPath expressions are evaluated when the `selector-expr` is run.
-
-```cli
-admin@ncs% set kickers notification-kicker k4 \
-    selector-expr "$NOTIFICATION_NAME=linkUp and address[ip=$IP]" \
-    kick-node /x/y[id='n1'] \
-    action-name kick-me \
-    variable IP value '192.168.128.55'
-admin@ncs%
-```
-
-In the example above, `IP` is defined and referred to from the `selector-expr` by using the expression `$IP`.
-
-{% hint style="info" %}
-A monitor expression is not evaluated by the XPath engine. Hence, no trace of the evaluation can be found in the XPath log.
-
-Monitor expressions are expanded and installed in an internal data structure at kicker creation/compile time. XPath may be used while defining kickers by referring to a named XPath expression.
-{% endhint %}
-
-### Serializer and Priority Values
-
-These values are used to ensure the order of kicker execution. Priority orders kickers for the same notification event, while serializer orders kickers chronologically for different notification events. By default, when no serializer or priority value is given, kickers may be triggered in any order and in parallel. However, some situations may require stricter ordering, and setting serializer and priority in kicker configuration allows you to achieve it.
-
-If priority for a set of kickers is specified, for each individual notification event, the kickers that match are executed in order, going from priority 0 to 255. For example, kicker `K1` with priority 5 is executed before the kicker `K2` with priority 8, which triggered for the same notification.
-
-Parallel execution of kickers can also result in a situation where a kicker for a notification is executed after the kicker for a later notification. That is, even though the trigger for the first kicker came first, this kicker might have a priority set and must wait for other kickers to execute first, while the kicker for the next notification can execute right away. If there is a dependency between these two kickers, a serializer value can ensure chronological ordering.
-
-A serializer is a simple integer value between 0 and 255. Notification kickers configured with the same value will be executed in the order in which they were triggered, relative to each other. For example, suppose there are three kickers configured: `T1` and `T2` with serializer set to 10, and `T3` with serializer of 20. NSO receives two notifications, the first triggering `T1` and `T3`, and the second triggering `T2`. Because of the serializer, NSO guarantees `T1` will be invoked before `T2`. But `T2`, even though it came in later, could potentially be invoked before `T3` because they are not serialized (have different serializer value).
-
-When using both serializer and priority, only kickers with the same serializer value are priority-ordered; that is, the serializer value takes precedence. For example, the kicker `Q1` with serializer 10 and priority 15 may execute before or after the kicker `Q2` with serializer 20 and priority 4. The reason is that `Q1` may need to wait for other kickers with serializer 10 from previous events. The same is true for `Q2` and previous kickers with serializer 20.
-
-### A Simple Notification Kicker Example
-
-In this example, we use the same action and setup as in the data kicker example above. The procedure for starting is also the same.
-
-The [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/website-service) example has devices that generate notifications on the stream `interface`. We start by defining a notification kicker for a certain `SUBSCRIPTION_NAME = 'mysub'`. This subscription does not exist yet, so the kicker will not be triggered:
-
-```cli
-admin@ncs# config
-
-admin@ncs(config)# kickers notification-kicker n1 \
-> selector-expr "$SUBSCRIPTION_NAME = 'mysub'" \
-> kick-node /services/wse:actions \
-> action-name diffcheck
-
-admin@ncs(config-notification-kicker-n1)# commit
-admin@ncs(config-notification-kicker-n1)# top
-
-admin@ncs(config)# show full-configuration kickers notification-kicker n1
-kickers notification-kicker n1
- selector-expr "$SUBSCRIPTION_NAME = 'mysub'"
- kick-node /services/wse:actions
- action-name diffcheck
-!
-```
-
-Now we define the `mysub` subscription on a device `www0` and refer to the notification stream `interface`. As soon as this definition is committed, the kicker will start triggering:
-
-```cli
-admin@ncs(config)# devices device www0 notifications subscription mysub \
-> local-user admin stream interface
-admin@ncs(config-subscription-mysub)# commit
-
-admin@ncs(config-subscription-mysub)# top
-admin@ncs(config)# exit
-```
-
-If we now inspect the `ncs-java-vm.log`, we will see a number of received notifications. We also see that the transaction being diff-iterated contains the notification as data under the path `/devices/device/notifications/received-notifications/notification/data`. This is an operational data list. However, this transaction is synthetic and will not be committed. Whether the notification is also stored in CDB is optional and does not depend on the notification kicker functionality:
-
-```cli
-admin@ncs# file show logs/ncs-java-vm.log
-
--------------------
-{[669406386|id], n1}
-{[669406386|monitor], /ncs:devices/device{www0}/notifications.../data/linkUp}
-{[669406386|tid], 758}
-path = /ncs:devices/device{www0}
-op = MOP_MODIFIED
-newValue = null
-path = /ncs:devices/device{www0}/notifications...
-op = MOP_CREATED
-newValue = null
-path = /ncs:devices/device{www0}/notifications.../event-time
-op = MOP_VALUE_SET
-newValue = 2017-02-15T16:35:36.039204+00:00
-path = /ncs:devices/device{www0}/notifications.../sequence-no
-op = MOP_VALUE_SET
-newValue = 0
-path = /ncs:devices/device{www0}/notifications.../data/notif:linkUp
-op = MOP_CREATED
-newValue = null
-path = /ncs:devices/device{www0}/notifications.../data/notif:linkUp/address{192.168.128.55}
-op = MOP_CREATED
-newValue = null
-path = /ncs:devices/device{www0}/notifications.../data/notif:linkUp/address{192.168.128.55}/ip
-op = MOP_VALUE_SET
-newValue = 192.168.128.55
-path = /ncs:devices/device{www0}/notifications.../data/notif:linkUp/address{192.168.128.55}/mask
-op = MOP_VALUE_SET
-newValue = 255.255.255.0
-path = /ncs:devices/device{www0}/notifications.../data/notif:linkUp/ifName
-op = MOP_VALUE_SET
-newValue = eth2
-path = /ncs:devices/device{www0}/notifications.../data/notif:linkUp/linkProperty{0}
-op = MOP_CREATED
-newValue = null
-path = /ncs:devices/device{www0}/notifications.../data/notif:linkUp/linkProperty{0}/extensions{0}
-op = MOP_CREATED
-newValue = 4668
-path = /ncs:devices/device{www0}/notifications.../data/notif:linkUp/linkProperty{0}/extensions{1}/name
-op = MOP_VALUE_SET
-newValue = 2
-path = /ncs:devices/device{www0}/notifications.../data/notif:linkUp/linkProperty{0}/flags
-op = MOP_VALUE_SET
-newValue = 42
-path = /ncs:devices/device{www0}/notifications.../data/notif:linkUp/linkProperty{0}/newlyAdded
-op = MOP_CREATED
-newValue = null
-```
-
-We end by removing the kicker and the subscription:
-
-```cli
-admin@ncs# config
-admin@ncs(config)# no kickers notification-kicker
-admin@ncs(config)# no devices device www0 notifications subscription
-admin@ncs(config)# commit
-```
-
-## Nano Services Reactive FastMap with Kicker
-
-Nano services use kickers to trigger executing state callback code, run templates, and execute actions according to a plan when pre-conditions are met. For more information, see [Nano Services for Provisioning with Side Effects](../core-concepts/implementing-services.md#ncs.development.reactive\_fastmap) and [Nano Services for Staged Provisioning](../core-concepts/nano-services.md).
-
-## Debugging Kickers
-
-### Kicker CLI Debug Target
-
-To find out why a kicker kicked when it shouldn't, or, more commonly and annoyingly, why it didn't kick when it should, use the CLI pipe command `debug kicker`.
-
-Evaluations of potential kicker invocations are reported in the CLI together with XPath evaluation results:
-
-```cli
-admin@ncs(config)# sys ifc port-0 hw mtu 8000
-admin@ncs(config)# commit | debug kicker
- 2017-02-15T16:35:36.039 kicker: k1 at /kicker_example:sys/kicker_example:ifc[kicker_example:name='port-0'] changed;
-not invoking 'kick_me' trigger-expr false -> false
-Commit complete.
-admin@ncs(config)#
-```
-
-### Unhide Kickers
-
-The top-level container `kickers` is by default invisible due to a hidden attribute. To make `kickers` visible in the CLI, two steps are required.
-
-1. First, the following XML snippet must be added to `ncs.conf`.
-
-    ```xml
-    <hide-group>
-        <name>debug</name>
-    </hide-group>
-    ```
-
-2. Next, the `unhide` command can be used in the CLI session.
-
- ```cli
- admin@ncs(config)# unhide debug
- admin@ncs(config)#
- ```
-
-### XPath Log
-
-Detailed information from the XPath evaluator can be enabled and made available in the xpath log. Add the following snippet to `ncs.conf`.
-
-```xml
-<xpath-trace-log>
-  <enabled>true</enabled>
-  <filename>./xpath.trace</filename>
-</xpath-trace-log>
-```
-
-### Devel Log
-
-Error information is written in the development log. The development log is meant to be used as support while developing the application. It is enabled in `ncs.conf`:
-
-{% code title="Enabling the Developer Log" %}
-```xml
-<developer-log>
-  <enabled>true</enabled>
-  <file>
-    <name>./logs/devel.log</name>
-    <enabled>true</enabled>
-  </file>
-</developer-log>
-<developer-log-level>trace</developer-log-level>
-```
-{% endcode %}
diff --git a/development/advanced-development/progress-trace.md b/development/advanced-development/progress-trace.md
deleted file mode 100644
index 8b8e5340..00000000
--- a/development/advanced-development/progress-trace.md
+++ /dev/null
@@ -1,309 +0,0 @@
----
-description: Gather useful information for debugging and troubleshooting.
----
-
-# Progress Trace
-
-Progress tracing in NSO provides developers with useful information for debugging, diagnostics, and profiling. This information can be used both during development cycles and after the release of the software. The system overhead for progress tracing is usually negligible.
-
-When a transaction or action is applied, NSO emits progress events. These events can be displayed and recorded in a number of different ways. The easiest way is to pipe a command to `details` in the CLI.
-
-```bash
-admin@ncs% commit | details
-Possible completions:
- debug verbose very-verbose
-admin@ncs% commit | details
-```
-
-As seen in the `details` output, all events are recorded with a timestamp and, in some cases, with the duration. All phases of the transaction, service, and device communication are printed.
-
-```
-applying transaction for running datastore usid=41 tid=1761 trace-id=d7f06482-41ad-4151-938d-7a8bc7b3ce33
-entering validate phase
- 2021-05-25T17:28:12.267 taking transaction lock... ok (0.000 s)
- 2021-05-25T17:28:12.267 holding transaction lock...
- 2021-05-25T17:28:12.268 creating rollback file... ok (0.004 s)
- 2021-05-25T17:28:12.272 run transforms and transaction hooks...
- 2021-05-25T17:28:12.273 run pre-transform validation... ok (0.000 s)
- 2021-05-25T17:28:12.275 service-manager: service /ordserv[name='o2']: run service... ok (0.035 s)
- 2021-05-25T17:28:12.311 run transforms and transaction hooks: ok (0.038 s)
- 2021-05-25T17:28:12.311 mark inactive... ok (0.000 s)
- 2021-05-25T17:28:12.311 pre validate... ok (0.000 s)
- 2021-05-25T17:28:12.311 run validation over the changeset... ok (0.000 s)
- 2021-05-25T17:28:12.312 run dependency-triggered validation... ok (0.000 s)
- 2021-05-25T17:28:12.312 check configuration policies... ok (0.000 s)
-leaving validate phase (0.045 s)
-entering write-start phase
- 2021-05-25T17:28:12.312 cdb: write-start
- 2021-05-25T17:28:12.313 check data kickers... ok (0.000 s)
-leaving write-start phase (0.001 s)
-entering prepare phase
- 2021-05-25T17:28:12.314 cdb: prepare
- 2021-05-25T17:28:12.314 device-manager: prepare
-leaving prepare phase (0.003 s)
-entering commit phase
- 2021-05-25T17:28:12.317 cdb: commit
- 2021-05-25T17:28:12.318 service-manager: commit
- 2021-05-25T17:28:12.318 device-manager: commit
- 2021-05-25T17:28:12.320 holding transaction lock: ok (0.033 s)
-leaving commit phase (0.002 s)
-applying transaction for running datastore usid=41 tid=1761 trace-id=d7f06482-41ad-4151-938d-7a8bc7b3ce33 (0.053 s)
-```
-
-Some actions (usually those involving device communication) also produce progress data.
-
-```cli
-admin@ncs% request devices device ce0 sync-from dry-run | details very-verbose
-running action /devices/device\[name='ce0'\]/sync-from usid=41 tid=1800 trace-id=fff4d4b0-5688-42f9-b5f7-53b7c3f70d35
- 2021-05-25T17:31:31.222 device ce0: sync-from...
- 2021-05-25T17:31:31.222 device ce0: taking device lock... ok (0.000 s)
- 2021-05-25T17:28:12.267 device ce0: holding device lock...
- 2021-05-25T17:31:31.227 device ce0: connect... ok (0.013 s)
- 2021-05-25T17:31:31.240 device ce0: show... ok (0.001 s)
- 2021-05-25T17:31:31.242 device ce0: get-trans-id... ok (0.000 s)
- 2021-05-25T17:31:31.242 device ce0: close... ok (0.000 s)
-...
- 2021-05-25T17:28:12.320 device ce0: holding device lock: ok (0.033 s)
- 2021-05-25T17:31:31.249 device ce0: sync-from: ok (0.026 s)
-running action /devices/device\[name='ce0'\]/sync-from usid=41 tid=1800 trace-id=fff4d4b0-5688-42f9-b5f7-53b7c3f70d35 (0.053 s)
-```
-
-## Configuring Progress Trace
-
-The `details` pipe in the CLI is useful during development cycles of, for example, a service, but not as useful when tracing calls from other northbound interfaces or events in a released running system. Then it's better to configure a progress trace to be written to a file or to operational data, which can be retrieved through a northbound interface.
-
-### Unhide Progress Trace
-
-The top-level container `progress` is by default invisible due to a hidden attribute. To make `progress` visible in the CLI, two steps are required:
-
-1. First, the following XML snippet must be added to `ncs.conf`:
-
-    ```xml
-    <hide-group>
-        <name>debug</name>
-    </hide-group>
-    ```
-2. Then, the `unhide` command is used in the CLI session:
-
- ```cli
- admin@ncs% unhide debug
- ```
-
-### Log to File
-
-Progress data can be written to a given file. This is useful when the data is to be analyzed in third-party software, like a spreadsheet application.
-
-```bash
-admin@ncs% set progress trace test destination file event.csv format csv
-```
-
-The file can be formatted as a comma-separated values (CSV) file, as defined by RFC 4180, or as a pretty-printed log file with each event on a single line.
-
-The file is written to the directory configured as `/ncs-config/logs/progress-trace/dir` in `ncs.conf`.
-
-### Log as Operational Data
-
-When the data is to be retrieved through a northbound interface, it is more useful to output the progress events as operational data.
-
-```bash
-admin@ncs% set progress trace test destination oper-data
-```
-
-This will log non-persistent operational data to the `/progress:progress/trace/event` list. As this list might grow rapidly, it has a maximum size (default: 1000 entries). When the maximum size is reached, the oldest list entry is purged.
-
-```bash
-admin@ncs% set progress trace test max-size 2000
-```
-
-The event list can be purged using the `/progress:progress/trace/purge` action:
-
-```bash
-admin# request progress trace test purge
-```
-
-### Log as Notification Events
-
-Progress events can be subscribed to as notification events. See [NOTIF API](../core-concepts/api-overview/java-api-overview.md#ug.java_api_overview.notif) for further details.
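-
-As a rough sketch, a standalone Python script (assuming a local NSO instance on the default port) could subscribe to these events with the low-level `_ncs.events` API; consult the NOTIF API documentation for the exact contents of each notification:
-
-```python
-import socket
-import _ncs
-from _ncs import events
-
-sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
-# Ask NSO to send progress events over this socket.
-events.notifications_connect(sock, events.NOTIF_PROGRESS,
-                             ip='127.0.0.1', port=_ncs.PORT)
-while True:
-    # Each notification is returned as a dict describing the event.
-    notif = events.read_notification(sock)
-    print(notif)
-```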
-
-### Verbosity
-
-The `verbosity` parameter is used to control the level of output. The following levels are available:
-
-
-| Level | Description |
-|-------|-------------|
-| `normal` | Informational messages that highlight the progress of the system at a coarse-grained level. Used mainly to give a high-level overview. This is the default and the lowest verbosity level. |
-| `verbose` | Detailed informational messages from the system. The various service and device phases and their duration will be traced. This is useful to get an overview of where time is spent in the system. |
-| `very-verbose` | Very detailed informational messages from the system and its internal operations. |
-| `debug` | The highest verbosity level, with fine-grained informational messages usable for debugging the system and its internal operations. Internal system transactions, as well as data kicker evaluation and CDB subscribers, will be traced. Setting this level could result in a large number of events being generated. |
-
-Additional debug tracing can be turned on for various parts. These are consciously left out of the normal debug level due to the high amount of output and should only be turned on during development.
-
-### Using Filters
-
-By default, all transaction and action events with the given verbosity level will be logged. To get a more selective choice of events, filters can be used.
-
-```bash
-admin@ncs% show progress trace filter
-Possible completions:
- all-devices - Only log events for devices.
- all-services - Only log events for services.
- context - Only log events for the specified context.
- device - Only log events for the specified device(s).
- device-group - Only log events for devices in this group.
- local-user - Only log events for the specified local user.
- service-type - Only log events for the specified service type.
-```
-
-The context filter can be used to only log events that originate through a specific northbound interface. The context is either one of `netconf`, `cli`, `webui`, `snmp`, `rest`, `system` or it can be any other context string defined through the use of MAAPI.
-
-```bash
-admin@ncs% set progress trace test filter context netconf
-```
-
-## Report Progress Events from User Code
-
-API methods to report progress events exist for Python, Java, Erlang, and C.
-
-### Python `ncs.maapi` Example
-
-```python
-class ServiceCallbacks(Service):
- @Service.create
- def cb_create(self, tctx, root, service, proplist):
- maapi = ncs.maagic.get_maapi(root)
- trans = maapi.attach(tctx)
-
- with trans.start_progress_span("service create()", path=service._path):
- ipv4_addr = None
- with trans.start_progress_span("allocate IP address") as sp11:
- self.log.info('alloc trace-id: ' + sp11.trace_id + \
- ' span-id: ' + sp11.span_id)
- ipv4_addr = alloc_ipv4_addr('192.168.0.0', 24)
- trans.progress_info('got IP address ' + ipv4_addr)
- with trans.start_progress_span("apply template",
- attrs={'ipv4_addr':ipv4_addr}) as sp12:
- self.log.info('templ trace-id: ' + sp12.trace_id + \
- ' span-id: ' + sp12.span_id)
- vars = ncs.template.Variables()
- vars.add('IPV4_ADDRESS', ipv4_addr)
- template = ncs.template.Template(service)
- template.apply('ipv4-addr-template', vars)
-```
-
-Further details can be found in the NSO Python API reference under `ncs.maapi.start_progress_span` and `ncs.maapi.progress_info`.
-
-### Java `com.tailf.progress.ProgressTrace` Example
-
-```java
- @ServiceCallback(servicePoint="...",
- callType=ServiceCBType.CREATE)
- public Properties create(ServiceContext context,
- NavuNode service,
- NavuNode ncsRoot,
- Properties opaque)
- throws DpCallbackException {
- try {
- Maapi maapi = service.context().getMaapi();
- int tid = service.context().getMaapiHandle();
- ProgressTrace progress = new ProgressTrace(maapi, tid,
- service.getConfPath());
- Span sp1 = progress.startSpan("service create()");
-
- Span sp11 = progress.startSpan("allocate IP address");
- LOGGER.info("alloc trace-id: " + sp11.getTraceId() +
- " span-id: " + sp11.getSpanId());
- String ipv4Addr = allocIpv4Addr("192.168.0.0", 24);
- progress.event("got IP address " + ipv4Addr);
- progress.endSpan(sp11);
-
- Attributes attrs = new Attributes();
- attrs.set("ipv4_addr", ipv4Addr);
- Span sp12 = progress.startSpan(Maapi.Verbosity.NORMAL,
- "apply template", attrs, null);
- LOGGER.info("templ trace-id: " + sp12.getTraceId() +
- " span-id: " + sp12.getSpanId());
- TemplateVariables ipVar = new TemplateVariables();
- ipVar.putQuoted("IPV4_ADDRESS", ipv4Addr);
- Template ipTemplate = new Template(context, "ipv4-addr-template");
- ipTemplate.apply(service, ipVar);
- progress.endSpan(sp12);
-
- progress.endSpan(sp1);
-```
-
-Further details can be found in the NSO Java API reference under `com.tailf.progress.ProgressTrace` and `com.tailf.progress.Span`.
-
-## Correlating with OpenTelemetry Traces
-
-[OpenTelemetry](https://opentelemetry.io/) is an observability SDK that instruments your code and libraries to collect telemetry data. NSO 6.3 and later by default generate span IDs that are compatible with W3C Trace Context and OpenTelemetry.
-
-To simplify correlation of telemetry data when your NSO code uses libraries that are instrumented with OpenTelemetry, you can propagate parent span information from NSO to those libraries. To make the most use of this data, you need to export OpenTelemetry and NSO spans to a common system. You can export NSO span data with the Observability Exporter package.
-
-To set up the trace context for OpenTelemetry:
-
-1. Create a new NSO span to obtain a span ID `span_id`.
-2. Create an OpenTelemetry span with the `span_id`.
-3. Set the OpenTelemetry span as the current span for the OpenTelemetry `Context` of the execution unit.
-
-The following listing shows the code necessary to achieve this in Python. It requires the `opentelemetry-api` package.
-
-```python
- @Service.create
- def cb_create(self, tctx, root, service, proplist):
- maapi = ncs.maagic.get_maapi(root)
- trans = maapi.attach(tctx)
-
- with trans.start_progress_span(
- "service create()",
- path=service._path
- ) as parent_span:
- import opentelemetry.context
- import opentelemetry.trace as otr
- span_ctx = otr.SpanContext(
- trace_id=int(parent_span.trace_id, 16),
- span_id=int(parent_span.span_id, 16),
- is_remote=False,
- trace_flags=otr.TraceFlags(otr.TraceFlags.SAMPLED)
- )
- otel_span = otr.NonRecordingSpan(span_ctx)
- otel_ctx = otr.set_span_in_context(otel_span)
- opentelemetry.context.attach(otel_ctx)
-
- ... # code with OpenTelemetry tracing
-```
-
-The code uses OpenTelemetry tracing from the service create callback; however, you can use the same approach in any Maapi session.
-
-For example, if your code uses the Python `requests` package, you can easily instrument it by adding the `opentelemetry.instrumentation.requests` package:
-
-```python
-import requests
-from opentelemetry.instrumentation.requests import RequestsInstrumentor
-
-RequestsInstrumentor().instrument()
-```
-
-If you now invoke `requests` from the service code as shown in the following snippet, it will produce OpenTelemetry spans whose top-most spans have the parent span ID set to the service span produced by NSO, as well as a matching trace ID.
-
-```python
- ... # code with OpenTelemetry tracing
- response = requests.get(url="https://www.cisco.com/")
-```
-
-```json
-{
- "name": "GET",
- "context": {
- "trace_id": "0xd02769f6e5ce0dea81fe3b61644b5571",
- "span_id": "0x6de7e48e83dc1b13",
- "trace_state": "[]"
- },
- "kind": "SpanKind.CLIENT",
- "parent_id": "0x749a311a41fe9ba6",
- "start_time": "2024-06-14T09:57:30.488761Z",
- "end_time": "2024-06-14T09:57:31.290909Z",
- "status": {
- "status_code": "UNSET"
- },
- "attributes": {
- "http.method": "GET",
- "http.url": "https://www.cisco.com/",
- "http.status_code": 200
- }
-}
-```
diff --git a/development/advanced-development/scaling-and-performance-optimization.md b/development/advanced-development/scaling-and-performance-optimization.md
deleted file mode 100644
index bd0b19e3..00000000
--- a/development/advanced-development/scaling-and-performance-optimization.md
+++ /dev/null
@@ -1,790 +0,0 @@
----
-description: Optimize NSO for scaling and performance.
----
-
-# Scaling and Performance Optimization
-
-With an increasing number of services and managed devices in NSO, performance becomes a more important aspect of the system. At the same time, other aspects, such as the way you organize code, also start playing an important role when using NSO on a bigger scale.
-
-The following section examines these concerns and presents the available options for scaling your NSO automation solution.
-
-## Understanding Your Use Case
-
-NSO allows you to tackle different automation challenges and every solution has its own specifics. Therefore, the best approach to scaling depends on the way the solution is implemented. What works in one case may be useless, or effectively degrade performance, for another. You must first analyze and understand how your particular use case behaves, which will then allow you to take the right approach to scaling.
-
-When trying to improve the performance, a very good, possibly even the best starting point is to inspect the tracing data. Tracing is further described in [Progress Trace](progress-trace.md). Yet a simple `commit | details` command already provides a lot of useful data.
-
-{% code title="Example Progress Trace Output for a Service" %}
-```cli
-admin@ncs(config-mysvc-test)# commit | details
- 2022-09-16T09:17:48.977 applying transaction...
-entering validate phase for running usid=54 tid=225 trace-id=3a4a3b7f-a09f-4f9d-b05e-1656310ea5b6
- 2022-09-16T09:17:48.977 creating rollback checkpoint... ok (0.000 s)
- 2022-09-16T09:17:48.978 creating rollback file... ok (0.004 s)
- 2022-09-16T09:17:48.983 creating pre-transform checkpoint... ok (0.000 s)
- 2022-09-16T09:17:48.983 run pre-transform validation... ok (0.000 s)
- 2022-09-16T09:17:48.983 creating transform checkpoint... ok (0.000 s)
- 2022-09-16T09:17:48.983 run transforms and transaction hooks...
- 2022-09-16T09:17:48.985 taking service write lock... ok (0.000 s)
- 2022-09-16T09:17:48.985 holding service write lock...
- 2022-09-16T09:17:48.986 service /mysvc[name='test']: run service... ok (0.012 s)
- 2022-09-16T09:17:48.999 run transforms and transaction hooks: ok (0.016 s)
- 2022-09-16T09:17:48.999 creating validation checkpoint... ok (0.000 s)
- 2022-09-16T09:17:49.000 mark inactive... ok (0.000 s)
- 2022-09-16T09:17:49.001 pre validate... ok (0.000 s)
- 2022-09-16T09:17:49.001 run validation over the changeset... ok (0.000 s)
- 2022-09-16T09:17:49.002 run dependency-triggered validation... ok (0.000 s)
- 2022-09-16T09:17:49.003 check configuration policies... ok (0.000 s)
- 2022-09-16T09:17:49.003 check for read-write conflicts... ok (0.000 s)
- 2022-09-16T09:17:49.004 taking transaction lock... ok (0.000 s)
- 2022-09-16T09:17:49.004 holding transaction lock...
- 2022-09-16T09:17:49.004 check for read-write conflicts... ok (0.000 s)
- 2022-09-16T09:17:49.004 applying service meta-data... ok (0.000 s)
-leaving validate phase for running usid=54 tid=225 trace-id=3a4a3b7f-a09f-4f9d-b05e-1656310ea5b6 (0.028 s)
-entering write-start phase for running usid=54 tid=225 trace-id=3a4a3b7f-a09f-4f9d-b05e-1656310ea5b6
- 2022-09-16T09:17:49.005 cdb: write-start
- 2022-09-16T09:17:49.006 ncs-internal-service-mux: write-start
- 2022-09-16T09:17:49.006 ncs-internal-device-mgr: write-start
- 2022-09-16T09:17:49.007 cdb: match subscribers... ok (0.000 s)
- 2022-09-16T09:17:49.007 cdb: create pre commit running... ok (0.000 s)
- 2022-09-16T09:17:49.007 cdb: write changeset... ok (0.000 s)
- 2022-09-16T09:17:49.008 check data kickers... ok (0.000 s)
-leaving write-start phase for running usid=54 tid=225 trace-id=3a4a3b7f-a09f-4f9d-b05e-1656310ea5b6 (0.003 s)
-entering prepare phase for running usid=54 tid=225 trace-id=3a4a3b7f-a09f-4f9d-b05e-1656310ea5b6
- 2022-09-16T09:17:49.009 cdb: prepare
- 2022-09-16T09:17:49.009 ncs-internal-device-mgr: prepare
- 2022-09-16T09:17:49.022 device ex1: push configuration...
-leaving prepare phase for running usid=54 tid=225 trace-id=3a4a3b7f-a09f-4f9d-b05e-1656310ea5b6 (0.121 s)
-entering commit phase for running usid=54 tid=225 trace-id=3a4a3b7f-a09f-4f9d-b05e-1656310ea5b6
- 2022-09-16T09:17:49.130 cdb: commit
- 2022-09-16T09:17:49.130 cdb: switch to new running... ok (0.000 s)
- 2022-09-16T09:17:49.132 ncs-internal-device-mgr: commit
- 2022-09-16T09:17:49.149 device ex1: push configuration: ok (0.126 s)
- 2022-09-16T09:17:49.151 holding service write lock: ok (0.166 s)
- 2022-09-16T09:17:49.151 holding transaction lock: ok (0.147 s)
-leaving commit phase for running usid=54 tid=225 trace-id=3a4a3b7f-a09f-4f9d-b05e-1656310ea5b6 (0.021 s)
- 2022-09-16T09:17:49.151 applying transaction: ok (0.174 s)
-Commit complete.
-admin@ncs(config-mysvc-test)#
-```
-{% endcode %}
-
-Pay attention to the time NSO spends doing specific tasks. For a simple service, these are mainly:
-
-* Validate service data (pre-transform validation)
-* Run service mapping logic
-* Validate produced configuration (changeset)
-* Push changes to affected devices
-* Commit the new configuration
-
-Tracing data can often quickly reveal a bottleneck, a hidden delay, or some other unexpected inefficiency in your code. The best strategy is to first address any such concerns if they show up since only well-performing code is a good candidate for further optimization. Otherwise, you might find yourself optimizing the wrong parameters and hitting a dead end. Visualizing the progress trace is often helpful in identifying bottlenecks. See [Measuring Transaction Throughput](scaling-and-performance-optimization.md#ncs.development.scaling.throughput.measure).
-
-Analyzing the service in isolation can yield useful insight. But it may also lead you in the wrong direction because some issues only manifest under load and the data from a live system can surprise you. That is why NSO supports different ways of exposing tracing information, including operational data and notification events. Remember to always verify that your observations and assumptions hold for a live, production system, too.
-
-## Where to Start?
-
-The times for different parts of the transaction, as reported by the tracing data, are very useful in determining where to focus your efforts.
-
-For example, if your service data model uses a very broad `must` or similar XPath statement, then NSO may potentially need to evaluate thousands of data entries. Such evaluation requires a considerable amount of additional processing and is, in turn, reflected in increased time spent in validation. The solution in this case is to limit the scope of the data referenced in the YANG constraint, which you can often achieve with a more specific XPath expression.
-
-Similarly, if a significant amount of time is spent constructing a service mapping, perhaps there is some redundant work occurring that you could optimize? Sometimes, however, provisioning requires calls to other systems or some computationally expensive operation, which you cannot easily manage without. Then you might want to consider splitting the provisioning process into smaller pieces, using nano services, for example. See [Simplify the Per-Device Concurrent Transaction Creation Using a Nano Service](scaling-and-performance-optimization.md#ncs.development.scaling.throughput.nano) for an example use-case and references to the Nano service documentation.
-
-In general, your own code for a single transaction with no additional load on NSO should execute quickly (sub-second, as a rule of thumb). The faster each service or action code is, the better the overall system performance. Using a service design pattern to both improve performance and scale and avoid conflicts is described in [Design to Minimize Conflicts](scaling-and-performance-optimization.md#ncs.development.scaling.throughput.conflicts).
-
-## Divide the Work Correctly
-
-Things such as reading external data or large computations should not be done inside the create code. Consider using an action to encapsulate these functions. An action does not run under the lock unless it triggers a transaction and can perform side effects as desired (see the sketch after the list below).
-
-There are several ways to utilize an action:
-
-* An action is allowed to perform side effects.
-* An action can read operational data from devices or external systems.
-* An action can write values to operational data in CDB, for later use from the service.
-* An action can write configuration to CDB, potentially triggering a service.
-
-Actions can be used together with nano services, see [Simplify the Per-Device Concurrent Transaction Creation Using a Nano Service](scaling-and-performance-optimization.md#ncs.development.scaling.throughput.nano).
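-
-As an example, here is a Python sketch (all names below are hypothetical) of an action that performs an expensive external lookup and caches the result as operational data in CDB, for the service `create()` code to read later:
-
-```python
-import ncs
-
-def fetch_from_external_system():
-    # Placeholder for an expensive call to an external system.
-    return 'some-value'
-
-class CacheRefresh(ncs.dp.Action):
-    @ncs.dp.Action.action
-    def cb_action(self, uinfo, name, kp, input, output):
-        # Runs outside the transaction lock, so the slow call does not
-        # block other transactions.
-        value = fetch_from_external_system()
-        with ncs.maapi.single_write_trans('admin', 'system',
-                                          db=ncs.OPERATIONAL) as t:
-            root = ncs.maagic.get_root(t)
-            # Hypothetical config-false leaf read later by the service.
-            root.mysvc_data.cached_value = value
-            t.apply()
-```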
-
-## Optimizing Device Communication
-
-With the default configuration, one of the first things you might notice standing out in the tracing data is that pushing device configuration takes a significant amount of time compared to other parts of service provisioning. Why is that?
-
-All changes in NSO happen inside a transaction. Network devices participate in the transaction, which gives you the all-or-nothing behavior, to ensure correctness and consistency across the network. But network communication is not instantaneous and a transaction in NSO holds a lock while waiting for devices to process the change. This way, changes to network devices are serialized, even when there are multiple simultaneous transactions. However, a lock blocks other transactions from proceeding, ultimately limiting the overall NSO transaction rate.
-
-So, in many cases, the NSO system is not really resource-constrained but merely experiencing lock contention. Therefore, making locks as short as possible is the best way to improve performance. In the example trace from the section [Understanding Your Use Case](scaling-and-performance-optimization.md#ncs.development.scaling.tracing), most of the time is spent in the prepare phase, where configuration changes are propagated to the network devices. Change propagation requires a management session with each participating device, as well as updating and validating the new configuration on the device side. Understandably, all of these tasks take time.
-
-NSO allows you to influence this behavior. Take a look at [Commit Queue](../../operation-and-usage/operations/nso-device-manager.md#user_guide.devicemanager.commit-queue) on how to avoid long device locks with commit queues and the trade-offs they bring. Usually, enabling the commit queue feature is the first and the most effective step to significantly improving transaction times.
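-
-For instance, a minimal Python sketch that enables the commit queue by default through MAAPI (the same setting is available as `devices global-settings commit-queue enabled-by-default` in the CLI):
-
-```python
-import ncs
-
-# Enable the commit queue for all transactions by default.
-with ncs.maapi.single_write_trans('admin', 'system') as t:
-    root = ncs.maagic.get_root(t)
-    root.devices.global_settings.commit_queue.enabled_by_default = True
-    t.apply()
-```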
-
-## Improving Subscribers
-
-The CDB subscriber mechanism is used to notify the application code about CDB changes and runs at the end of the transaction commit, inside a global lock. Due to this fact, the number and configuration of subscribers affect performance and should be investigated early in your performance optimization efforts.
-
-A badly implemented subscriber prolongs the time the transaction holds the lock, preventing other transactions from completing, in addition to the original transaction taking more time to commit. There are mainly two reasons for suboptimal operation: either the subscriber is too broad and must process too many (irrelevant) changes, or it performs more work inside the lock than necessary. As a recommended practice, the subscriber should only note the changes and schedule the actual processing for later, in order to return and release the lock as quickly as possible.
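-
-If you nevertheless implement a subscriber, the note-and-defer pattern could look like the following Python sketch (the monitored path and names are hypothetical):
-
-```python
-import queue
-import threading
-import ncs
-from ncs.cdb import Subscriber
-
-class ChangeRecorder(object):
-    def __init__(self, q):
-        self.q = q
-
-    def iterate(self, kp, op, oldval, newval, state):
-        # Called inside the lock: only record the change and return.
-        self.q.put((str(kp), op))
-        return ncs.ITER_CONTINUE
-
-class App(ncs.application.Application):
-    def setup(self):
-        self.q = queue.Queue()
-        # The worker thread does the heavy processing outside the lock.
-        threading.Thread(target=self.worker, daemon=True).start()
-        sub = Subscriber(app=self)
-        sub.register('/some/monitored/path', ChangeRecorder(self.q))
-        sub.start()
-
-    def worker(self):
-        while True:
-            path, op = self.q.get()
-            self.log.info('processing change at ', path)
-```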
-
-Moreover, subscribers incur processing overhead regardless of their implementation because NSO needs to communicate with the custom subscriber code, typically written in Java or Python.
-
-That is why modern, performant code in NSO should use the kicker mechanism instead of implementing custom subscribers. While it is still possible to create a badly performing kicker, you are less likely to do so inadvertently. In most situations, kickers are also easier to implement and troubleshoot. You can read more on kickers in [Kicker](kicker.md).
-
-## Minimizing Concurrency Conflicts
-
-The time it takes to complete a transaction is certainly an important performance metric. However, after a certain point, it gets increasingly hard or even impossible to get meaningful improvement from optimizing each individual transaction. As it turns out, on a busy system, there are usually multiple outstanding requests. So, instead of trying to process each as fast as possible one after another, the system might process them in parallel.
-
-_Figure: Running Transactions Sequentially and in Parallel_
-
-In practice and as the figure shows, some parts must still be processed sequentially to ensure transactional properties. However, there is a significant gain in the overall time it takes to process all transactions in a busy system, even though each might take a little longer individually due to the concurrency overhead.
-
-Throughput then becomes a more relevant metric. It is the number of requests or transactions that the system can process in a given time unit. While throughput is still related to individual transaction times, other factors also come into play. An important one is the way in which NSO implements concurrency and the interaction between the transaction system and your own user code. Designing for transaction throughput is covered in detail later in this section, and the NSO concurrency model is detailed in [NSO Concurrency Model](../core-concepts/nso-concurrency-model.md).
-
-This section provides guidance on identifying transaction conflicts and what affects their occurrence, so you can make your code less likely to produce them. Conflicts arise more frequently on busier systems and negatively affect throughput, which makes them a good candidate for optimization.
-
-## Fine-tuning the Concurrency Parameters
-
-Depending on the specifics of the server running NSO, additional performance improvement might be possible by fine-tuning the `transaction-limits` set of configuration parameters in `ncs.conf`. Please see the ncs.conf(1) manpage for details.
-
-## Enabling Even More Parallelism
-
-If you are experiencing high resource utilization, such as memory and CPU usage, while individual transactions are optimized to execute fast and the rate of conflicts is low, it's possible you are starting to see the level of demand that pushes the limits of this system.
-
-First, you should try adding more resources, in a scale-up manner, if possible. At the same time, you might also have some services that are using an older, less performant user code execution model. For example, the way Python code is executed is controlled by the callpoint-model option, described in [The `application` Component](../core-concepts/nso-virtual-machines/nso-python-vm.md#ncs.development.pythonvm.cthread), which you should ensure is set to the most performant setting.
-
-Regardless, a single system cannot scale indefinitely. After you have exhausted all other options, you will need to “scale out,” that is, split the workload across multiple NSO instances. You can achieve this by using the Layered Service Architecture (LSA) approach. But the approach has its trade-offs, so make sure it provides the right benefits in your case. The LSA is further documented in [LSA Overview](../../administration/advanced-topics/layered-service-architecture.md) in Layered Service Architecture.
-
-## Limit **`sync-from`**
-
-In a brownfield environment, where the configuration is not 100% automated and controlled by NSO alone but also written to by other systems or operators, NSO is bound to end up out-of-sync with the device. How to handle synchronization is a big topic, and it is vital to understand what it means to you when things are out of sync. This will help guide your strategy.
-
-If NSO is frequently brought out of sync, it can be tempting to invoke `sync-from` from the create callback. While it does achieve a higher degree of reliability, in the sense that service modifications won't return an out-of-sync error, the impact on performance is usually catastrophic. The typical `sync-from` operation takes orders of magnitude longer than the typical service modification, and transactional throughput will suffer greatly.
-
-But other alternatives are often better:
-
-* You can synchronize the configuration from the device when it reports a change, rather than when the service is modified, by listening for configuration change events from the device, e.g., via RESTCONF or NETCONF notifications, SNMP traps, or syslog, and invoking `sync-from` or `partial-sync-from` when another party (not NSO) has modified the device (see the sketch after this list). See also the section called [Partial Sync](developing-services/services-deep-dive.md#ch_svcref.partialsync).
-* The `devices sync-from` command does not hold the transaction lock and runs across devices concurrently, which reduces the total time spent synchronizing. This is particularly useful for periodic synchronization to lower the risk of being out of sync when committing configuration changes.
-* Using the `no-overwrite` commit flag, you can be more lax about being in sync and focus on not overwriting the modified configuration.
-* If the configuration is 100% automated and controlled by NSO alone, you can use `out-of-sync-behaviour accept` to completely ignore whether the device is in sync or not.
-* Letting your modification fail with an out-of-sync error and handling that error at the calling side.
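-
-For example, a Python sketch (the device name and subtree path are hypothetical) that invokes `partial-sync-from` to pull in only the relevant subtree:
-
-```python
-import ncs
-
-with ncs.maapi.single_read_trans('admin', 'system') as t:
-    root = ncs.maagic.get_root(t)
-    action = root.devices.partial_sync_from
-    inp = action.get_input()
-    # Only synchronize the interface subtree of device ex0.
-    inp.path = ["/ncs:devices/device{ex0}/config/r:sys/interfaces"]
-    result = action(inp)
-    for r in result.sync_result:
-        print(r.device, r.result)
-```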
-
-## Designing for Maximal Transaction Throughput
-
-Maximal transaction throughput refers to the maximum number of transactions a system can handle within a given period. Factors that can influence maximal transaction throughput include:
-
-* Hardware capabilities (e.g., processing power, memory).
-* Software efficiency.
-* Network bandwidth.
-* The complexity of the transactions themselves.
-
-Besides making sure the system hardware capabilities and network bandwidth are not a bottleneck, there are four areas where the NSO user can significantly affect the transaction throughput performance for an NSO node:
-
-* Run multiple transactions concurrently. For example, multiple concurrent RESTCONF or NETCONF edits, CLI commits, MAAPI `apply()`, nano service re-deploy, etc.
-* Design to avoid conflicts and minimize the service `create()` and validation implementation. For example, in service templates and code mapping to devices or other service instances, YANG `must` statements with XPath expressions or validation code.
-* Using commit queues to exclude the time to push configuration changes to devices from inside the transaction lock.
-* Simplify using nano and stacked services. If the processor where NSO runs becomes a severe bottleneck with a stacked service, the added complexity of migrating the stacked service to an LSA setup may be justified. LSA increases the number of available CPU cores beyond a single processor while still exposing only a single service instance as the number of devices grows.
-
-_Figure: Designing for Maximal Transaction Throughput_
-
-### Measuring Transaction Throughput
-
-Measuring transaction performance includes measuring the total wall-clock time for the service deployment transaction(s) and using the detailed NSO progress trace of the transactions to find bottlenecks. The developer log helps debug the NSO internals, and the XPath trace log helps find misbehaving XPath expressions used in, for example, YANG `must` statements.
-
-The picture below shows a visualization of the NSO progress trace when running a single transaction for two service instances configuring a device each:
-
-_Figure: Progress trace visualization of one transaction deploying two service instances_
-The total RESTCONF edit took \~5 seconds, and the service mapping (“creating service” event) and validation (“run validation ...” event) were done sequentially for the service instances and took 2 seconds each. The configuration push to the devices was done concurrently in 1 second.
-
-For progress trace documentation, see [Progress Trace](progress-trace.md).
-
-### Running the `perf-trans` Example Using a Single Transaction
-
-The [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example from the NSO example set explores the opportunities to improve the wall-clock time performance and utilization, as well as opportunities to avoid common pitfalls.
-
-The example uses simulated CPU loads for service creation and validation work. Device work is simulated with `sleep()` as it will not run on the same processor in a production system.
-
-The example shows how NSO can benefit from running many transactions concurrently if the service and validation code allow concurrency. It uses the NSO progress trace feature to get detailed timing information for the transactions in the system.
-
-The provided code sets up an NSO instance that exports tracing data to a `.csv` file, provisions one or more service instances, which each map to a device, and shows different (average) transaction times and a graph to visualize the sequences plus concurrency.
-
-Play with the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example by tweaking the `measure.py` script parameters:
-
-```
---ntrans NTRANS    Number of transactions (e.g., RESTCONF patches) used to
-                   deploy the service instances
---nwork NWORK      Work per transaction in the service creation and
-                   validation phases; one second of CPU time per work item
---ndtrans NDTRANS  Number of devices the service will configure per
-                   service transaction
---cqparam CQPARAM  Commit queue behavior: bypass, sync, or async
---ddelay DDELAY    Transaction delay (simulated by sleeping) on the netsim
-                   devices, in seconds
-```
-
-See the README in the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example for details.
-
-To run the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example from the NSO example set and recreate the variant shown in the progress trace above:
-
-```bash
-cd $NCS_DIR/examples.ncs/scaling-performance/perf-trans
-make NDEVS=2 python
-python3 measure.py --ntrans 1 --nwork 2 --ndtrans 2 --cqparam bypass --ddelay 1
-python3 ../common/simple_progress_trace_viewer.py $(ls logs/*.csv)
-```
-
-The following is a sequence diagram and the progress trace of the example, describing the transaction `t1`. The transaction deploys service configuration to the devices using a single RESTCONF `patch` request to NSO and then NSO configures the netsim devices using NETCONF:
-
-```
-RESTCONF   service   validate                   push config
-patch      create    config     ndtrans=2       netsim
-ntrans=1   nwork=2   nwork=2    cqparam=bypass  device  ddelay=1
-  t1 ------> 2s -----> 2s ---------------------> ex0 -----> 1s
-                           \-------------------> ex1 -----> 1s
-  wall-clock 2s        2s                                   1s = 5s
-```
-
-The only part running concurrently in the example above was configuring the devices. It is the most straightforward option if transaction throughput performance is not a concern or if the service creation and validation work is insignificant. A single-transaction service deployment does not need commit queues, as it is the only transaction holding the transaction lock while configuring the devices inside the critical section. See the “holding transaction lock” event in the progress trace above.
-
-Stop NSO and the netsim devices:
-
-```bash
-make stop
-```
-
-### Concurrent Transactions
-
-Everything from smartphones and tablets to laptops, desktops, and servers now contains multi-core processors. For maximal throughput, these powerful multi-core systems need to be fully utilized. This way, the wall-clock time is minimized when deploying service configuration changes to the network, which is usually equated with performance. Therefore, enabling NSO to spread as much work as possible across all available cores becomes important. The goal is to have service deployments maximize their utilization of the total available CPU time to deploy services faster to the users who ordered them.
-
-Close to full utilization of every CPU core when running under maximal load, for example, ten transactions to ten devices, is ideal, as some process viewer tools such as `htop` visualize with meters:
-
-```
- 0[|||||||||||||||||||||||||||||||||||||||||||||||||100.0%]
- 1[|||||||||||||||||||||||||||||||||||||||||||||||||100.0%]
- 2[||||||||||||||||||||||||||||||||||||||||||||||||||99.3%]
- 3[||||||||||||||||||||||||||||||||||||||||||||||||||99.3%]
- 4[||||||||||||||||||||||||||||||||||||||||||||||||||99.3%]
- 5[||||||||||||||||||||||||||||||||||||||||||||||||||99.3%]
- 6[||||||||||||||||||||||||||||||||||||||||||||||||||98.7%]
- 7[||||||||||||||||||||||||||||||||||||||||||||||||||98.7%]
- 8[||||||||||||||||||||||||||||||||||||||||||||||||||98.7%]
- 9[||||||||||||||||||||||||||||||||||||||||||||||||||98.7%]
- ...
-```
-
-One transaction per RFS instance and device allows each NSO transaction to run concurrently on a separate core, for example, as multiple concurrent RESTCONF or NETCONF edits, CLI commits, MAAPI `apply()` calls, or nano service re-deploys. Keep the number of running concurrent transactions equal to or below the number of cores available in the multi-core processor to avoid performance degradation due to increased contention on system internals and resources. NSO helps by limiting the number of transactions applying changes in parallel to, by default, the number of logical processors (e.g., CPU cores). See [ncs.conf(5)](../../resources/man/ncs.conf.5.md) in Manual Pages under `/ncs-config/transaction-limits/max-transactions` for details.
-
-
-
-### Design to Minimize Conflicts
-
-Conflicts between transactions and how to avoid them are described in [Minimizing Concurrency Conflicts](scaling-and-performance-optimization.md#ncs.development.scaling.conflicts) and in detail by the [NSO Concurrency Model](../core-concepts/nso-concurrency-model.md). While NSO can handle transaction conflicts gracefully with retries, retries affect transaction throughput performance. A simple but effective design pattern to avoid conflicts is to update one device with one Resource Facing Service (RFS) instance where service instances do not read each other's configuration changes.
-
-
-
-### Design to Minimize Service and Validation Processing Time
-
-An overly complex service or validation implementation using templates, code, and XPath expressions increases the processing required and, even if transactions are processed concurrently, will affect the wall-clock time spent processing and, thus, transaction throughput.
-
-When data processing performance is of interest, the best practice rule of thumb is to ensure that `must` and `when` statement XPath expressions in YANG models and service templates are only used as necessary and kept as simple as possible.
-
-Suppose a service creates a significant amount of configuration data for devices. In that case, it is often significantly faster to use a single MAAPI `load_config_cmds()` or `shared_set_values()` function instead of using multiple `create()` and `set()` calls or configuration template `apply()` calls.
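-
-As a rough sketch of the bulk approach, the following loads a prebuilt XML payload in one MAAPI call instead of issuing many individual `create()` and `set()` operations. The payload and flag combination are illustrative assumptions; inside an actual service `create()`, use the shared, FASTMAP-aware variants demonstrated in the example below:
-
-```python
-import ncs
-import _ncs
-
-# Illustrative payload; a real one would carry the generated rules/routes.
-XML = '''<devices xmlns="http://tail-f.com/ns/ncs">
-  <device>
-    <name>asa0</name>
-  </device>
-</devices>'''
-
-with ncs.maapi.single_write_trans('admin', 'python') as t:
-    # One bulk load instead of thousands of individual write calls.
-    t.load_config_cmds(_ncs.maapi.CONFIG_XML | _ncs.maapi.CONFIG_MERGE, XML, '/')
-    t.apply()
-```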
-
-#### **Running the `perf-bulkcreate` Example Using a Single Call to MAAPI `shared_set_values()`**
-
-The [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-bulkcreate) example writes configuration to an access control list and a route list of a Cisco Adaptive Security Appliance (ASA) device. It uses either MAAPI Python with a configuration template, `create()` and `set()` calls, Python `shared_set_values()` and `load_config_cmds()`, or Java `sharedSetValues()` and `loadConfigCmds()` to write the configuration in XML format.
-
-To run the [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-bulkcreate) example using MAAPI Python `create()` and `set()` calls to create 3000 rules and 3000 routes on one device:
-
-```bash
-cd $NCS_DIR/examples.ncs/scaling-performance/perf-bulkcreate
-./measure.sh -r 3000 -t py_create -n true
-```
-
-The commit uses the `no-networking` parameter to skip pushing the configuration to the simulated and disproportionately slow Cisco ASA netsim device. The resulting NSO progress trace:
-
-
-
-Next, run the [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-bulkcreate) example using a single MAAPI Python `shared_set_values()` call to create 3000 rules and 3000 routes on one device:
-
-```bash
-./measure.sh -r 3000 -t py_setvals_xml -n true
-```
-
-The resulting NSO progress trace:
-
-
-
-Using the MAAPI `shared_set_values()` function, the service `create` callback is, for this example, \~5x faster than using the MAAPI `create()` and `set()` functions. The total wall-clock time for the transaction is more than 2x faster, and the difference will increase for larger transactions.
-
-Stop NSO and the netsim devices:
-
-```bash
-make stop
-```
-
-### Use a Data Kicker Instead of a CDB Subscriber
-
-A kicker triggering on a CDB change, a data-kicker, should be used instead of a CDB subscriber when the action taken does not have to run inside the transaction lock, i.e., the critical section of the transaction. A CDB subscriber will be invoked inside the critical section and, thus, will have a negative impact on the transaction throughput. See [Improving Subscribers](scaling-and-performance-optimization.md#ncs.development.scaling.kicker) for more details.
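-
-A data-kicker is plain configuration under `/kickers`. A sketch that configures one from Python MAAPI follows; the monitored path, action path, and action name are hypothetical placeholders:
-
-```python
-import ncs
-
-with ncs.maapi.single_write_trans('admin', 'python') as t:
-    root = ncs.maagic.get_root(t)
-    kicker = root.kickers.data_kicker.create('device-config-kicker')
-    kicker.monitor = '/ncs:devices/device/config'  # path to watch
-    kicker.kick_node = '/mypkg:actions'            # hypothetical action container
-    kicker.action_name = 'on-device-change'        # hypothetical action
-    t.apply()
-```
-
-Unlike a CDB subscriber, the kicker invokes the action outside the committing transaction's lock.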
-
-### Shorten the Time Used for Writing Configuration to Devices
-
-Writing to devices and other network elements that are slow to configure will stall transaction throughput if you do not enable commit queues, as transactions waiting for the transaction lock to be released cannot start configuring devices before the transaction ahead of them is done writing. For example, if one device is configured using CLI transported with [IP over Avian Carriers](https://datatracker.ietf.org/doc/html/rfc1149), the transactions, including such a device, will significantly stall transactions behind it going to devices supporting [RESTCONF](https://datatracker.ietf.org/doc/html/rfc8040) or [NETCONF](https://datatracker.ietf.org/doc/html/rfc6241) over a fast optical transport. Where transaction throughput performance is a concern, choosing devices that can be configured efficiently to implement their part of the service configuration is wise.
-
-### Running the `perf-trans` Example Using One Transaction per Device
-
-Dividing the service creation and validation work into two separate transactions, one per device, allows the work to be spread across two CPU cores in a multi-core processor. To run the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example with the work divided into one transaction per device:
-
-```bash
-cd $NCS_DIR/examples.ncs/scaling-performance/perf-trans
-make stop clean NDEVS=2 python
-python3 measure.py --ntrans 2 --nwork 1 --ndtrans 1 --cqparam bypass --ddelay 1
-python3 ../common/simple_progress_trace_viewer.py $(ls logs/*.csv)
-```
-
-The resulting NSO progress trace:
-
-
-
-A sequence diagram with transactions `t1` and `t2` deploying service configuration to two devices using RESTCONF `patch` requests to NSO with NSO configuring the netsim devices using NETCONF:
-
-```
-RESTCONF   service   validate                   push config
-patch      create    config    ndtrans=1        netsim            netsim
-ntrans=2   nwork=1   nwork=1   cqparam=bypass   device  ddelay=1  device  ddelay=1
-  t1 ------> 1s -----> 1s ---------------------> ex0 ---> 1s
-  t2 ------> 1s -----> 1s ----------------------------------------> ex1 ---> 1s
-  wall-clock 1s        1s                                 1s                 1s = 4s
-```
-
-Note how the service creation and validation work is now divided into 1s per transaction and runs concurrently on one CPU core each. However, the two transactions cannot push the configuration concurrently to a device each, as the config push is done inside the critical section, making one of the transactions wait for the other to release the transaction lock. See the two “holding the transaction lock” events in the above progress trace visualization.
-
-To enable transactions to push configuration to devices concurrently, we must enable commit queues.
-
-### Using Commit Queues
-
-The concept of a network-wide transaction requires NSO to wait for the managed devices to process the configuration change before exiting the critical section, i.e., before NSO can release the transaction lock. In the meantime, other transactions have to wait their turn to write to CDB and the devices. The commit queue feature avoids waiting for configuration to be written to the device and increases the throughput. For most use cases, commit queues improve transaction throughput significantly.
-
-Writing to a commit queue instead of the device moves the device configuration push outside of the critical region, and the transaction lock can instead be released when the change has been written to the commit queue.
-
-
-
-For commit queue documentation, see [Commit Queue](../../operation-and-usage/operations/nso-device-manager.md#user_guide.devicemanager.commit-queue).
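-
-From code, the commit queue behavior can be selected per commit with commit parameters. A brief sketch, assuming the `ncs.maapi.CommitParams` helper and a netsim device `ex0` with the example router NED:
-
-```python
-import ncs
-
-with ncs.maapi.single_write_trans('admin', 'python') as t:
-    root = ncs.maagic.get_root(t)
-    device = root.devices.device['ex0']                  # assumed device
-    device.config.r__sys.ntp.server.create('10.0.0.1')   # assumed NED path
-    params = ncs.maapi.CommitParams()
-    params.commit_queue_sync()  # queue the push; wait for the queue item
-    t.apply_params(True, params)
-```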
-
-### Enabling Commit Queues for the `perf-trans` Example
-
-Enabling commit queues allows the two transactions to spread the create, validation, and configuration push to devices work across CPU cores in a multi-core processor. Only the CDB write and commit queue write now remain inside the critical section, and the transaction lock is released as soon as the device configuration changes have been written to the commit queues instead of waiting for the config push to the devices to complete. To run the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example with the work divided into one transaction per device and commit queues enabled:
-
-```bash
-make stop clean NDEVS=2 python
-python3 measure.py --ntrans 2 --nwork 1 --ndtrans 1 --cqparam sync --ddelay 1
-python3 ../common/simple_progress_trace_viewer.py $(ls logs/*.csv)
-```
-
-The resulting NSO progress trace:
-
-
-
-A sequence diagram with transactions `t1` and `t2` deploying service configuration to two devices using RESTCONF `patch` requests to NSO with NSO configuring the netsim devices using NETCONF:
-
-```
-RESTCONF   service   validate                 push config
-patch      create    config    ndtrans=1      netsim
-ntrans=2   nwork=1   nwork=1   cqparam=sync   device  ddelay=1
-  t1 ------> 1s -----> 1s ---------[----]----> ex0 -----> 1s
-  t2 ------> 1s -----> 1s ---------[----]----> ex1 -----> 1s
-  wall-clock 1s        1s                                 1s = 3s
-```
-
-Note how the two transactions now push the configuration concurrently to a device each as the config push is done outside of the critical section. See the two push configuration events in the above progress trace visualization.
-
-Stop NSO and the netsim devices:
-
-```bash
-make stop
-```
-
-Running the [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-bulkcreate) example with two devices and commit queues enabled will produce a similar result.
-
-### Simplify the Per-Device Concurrent Transaction Creation Using a Nano Service
-
-The [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example service uses one transaction per service instance where each service instance configures one device. This enables transactions to run concurrently on separate CPU cores in a multi-core processor. The example sends RESTCONF `patch` requests concurrently to start transactions that run concurrently with the NSO transaction manager. However, dividing the work into multiple processes may not be practical for some applications using the NSO northbound interfaces, e.g., CLI or RESTCONF. Also, it makes a future migration to LSA more complex.
-
-To simplify the NSO manager application, a resource-facing nano service (RFS) can start a process per service instance. The NSO manager application or user can then use a single transaction, e.g., CLI or RESTCONF, to configure multiple service instances where the NSO nano service divides the service instances into transactions running concurrently in separate processes.
-
-
-
-The nano service can be straightforward, for example, using a single `t3:configured` state to invoke a service template or a `create()` callback. If validation code is required, it can run in a nano service post-action, `t3:validated` state, instead of a validation point callback to keep the validation code in the process created by the nano service.
-
-
-
-See [Nano Services for Staged Provisioning](../core-concepts/nano-services.md) and [Develop and Deploy a Nano Service](../../administration/installation-and-deployment/deployment/develop-and-deploy-a-nano-service.md) for Nano service documentation.
-
-### Simplify Using a CFS and Minimize Diff-set Calculation Time
-
-A Customer Facing Service (CFS) that is stacked with the RFS and maps to one RFS instance per device can simplify the service that is exposed to the NSO northbound interfaces so that a single NSO northbound interface transaction spawns multiple transactions, for example, one transaction per RFS instance when using the `converge-on-re-deploy` YANG extension with the nano service behavior tree.
-
-
-
-Furthermore, the time spent calculating the diff-set, as seen with the `saving reverse diff-set and applying changes` event in the [perf-bulkcreate example](scaling-and-performance-optimization.md#running-the-perf-bulkcreate-example-using-a-single-call-to-maapi-shared_set_values), can be [optimized using a stacked service design](developing-services/services-deep-dive.md#stacked-service-design).
-
-### Running the CFS and Nano Service-enabled `perf-stack` Example
-
-The [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-stack) example showcases how a CFS on top of a simple resource-facing nano service can be implemented with the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example by modifying the existing t3 RFS and adding a CFS. Instead of multiple RESTCONF transactions, the example uses a single CLI CFS service commit that updates the desired number of service instances. The commit configures multiple service instances in a single transaction where the nano service runs each service instance in a separate process to allow multiple cores to be used concurrently.
-
-
-
-Run as below to start two transactions with a 1-second CPU time workload per transaction in both the service and validation callbacks, each transaction pushing the device configuration to one device, each using a synchronous commit queue, where each device simulates taking 1 second to make the configuration changes to the device:
-
-```bash
-cd $NCS_DIR/examples.ncs/scaling-performance/perf-stack
-./showcase.sh -d 2 -t 2 -w 1 -r 1 -q 'True' -y 1
-```
-
-
-
-The above progress trace visualization is truncated to fit, but notice how the `t3:validated` state action callbacks, `t3:configured` state service creation callbacks and configuration push from the commit queues are running concurrently (on separate CPU cores) when initiating the service deployment with a single transaction started by the CLI commit.
-
-A sequence diagram describing the transaction `t1` deploying service configuration to the devices using the NSO CLI:
-
-```
-                                                     push     config
-        CFS              validate  service           config   change
-CLI     create   Nano    config    create  ndtrans=1 netsim   subscriber
-commit  trans=2  RFS     nwork=1   nwork=1 cq=True   device   ddelay=1
-                  t1 --> 1s -----> 1s ------[----]---> ex0 ---> 1s
-  t -----> t --->
-                  t2 --> 1s -----> 1s ------[----]---> ex1 ---> 1s
-  wall-clock             1s        1s                           1s = 3s
-```
-
-The two transactions run concurrently, deploying the service in \~3 seconds (plus some overhead) of wall-clock time. Like the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example, you can play around with the [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-stack) example by tweaking the parameters.
-
-```
--d NDEVS
- The number of netsim (ConfD) devices (network elements) started.
- Default 4
-
--t NTRANS
- The number of transactions updating the same service in parallel.
- Default: $NDEVS
-
--w NWORK
- Work per transaction in the service creation and validation phases. One
- second of CPU time per work item.
- Default: 3 seconds of CPU time.
-
--r NDTRANS
- Number of devices the service will configure per service transaction.
- Default: 1
-
--q USECQ
- Use device commit queues.
- Default: True
-
--y DEV_DELAY
- Transaction delay (simulated by sleeping) on the netsim devices (seconds).
- Default: 1 second
-```
-
-See the `README` in the [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-stack) example for details. For even more details, see the steps in the `showcase` script.
-
-Stop NSO and the netsim devices:
-
-```bash
-make stop
-```
-
-### Migrating to and Scaling Up Using an LSA Setup
-
-If the processor where NSO runs becomes a severe bottleneck, the CFS can migrate to a layered service architecture (LSA) setup. The [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-stack) example implements stacked services, a CFS abstracting the RFS. It allows for easy migration to an LSA setup to scale with the number of devices or network elements participating in the service deployment. While adding complexity, LSA allows exposing a single CFS instance for all processors instead of one per processor.
-
-{% hint style="info" %}
-Before considering taking on the complexity of a multi-NSO node LSA setup, make sure you have done the following:
-
-* Explored all possible avenues of design and optimization improvements described so far in this section.
-* Measured the transaction performance to find bottlenecks.
-* Optimized any bottlenecks to reduce their overhead as much as possible.
-* Observed that the available processor cores are all fully utilized.
-* Explored running NSO on a more powerful processor with more CPU cores and faster clock speed.
-* If more devices and RFS instances are created at one point than there are available CPU cores, verify that increasing the number of CPU cores would result in a significant improvement, i.e., that the CPU processing spent on service creation and validation, rather than writing the configuration to CDB and the commit queues and pushing the configuration to the devices, is the substantial bottleneck.
-
-Migrating to an LSA setup should only be considered after checking all boxes for the above items.
-{% endhint %}
-
-
-
-### Running the LSA-enabled `perf-lsa` Example
-
-The [examples.ncs/scaling-performance/perf-lsa](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-lsa) example builds on the [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-stack) example and showcases an LSA setup using two RFS NSO instances, `lower-nso-1` and `lower-nso-2`, with a CFS NSO instance, `upper-nso`.
-
-
-
-You can imagine adding more RFS NSO instances, `lower-nso-3`, `lower-nso-4`, etc., to the existing two as the number of devices increases. One NSO instance per multi-core processor and at least one CPU core per device (network element) is likely the most performant setup for this simulated work example. See [LSA Overview](../../administration/advanced-topics/layered-service-architecture.md) in Layered Service Architecture for more.
-
-As an example, the following variant starts four RFS transactions (two per RFS node) with a 1-second CPU time workload per transaction in both the service and validation callbacks, each RFS transaction pushing the device configuration to 1 device using synchronous commit queues, where each device simulates taking 1 second to make the configuration changes to the device:
-
-```bash
-cd $NCS_DIR/examples.ncs/scaling-performance/perf-lsa
-./showcase.sh -d 2 -t 2 -w 1 -r 1 -q 'True' -y 1
-```
-
-The three NSO progress trace visualizations show NSO on the CFS and the two RFS nodes. Notice how the CLI commit starts a transaction on the CFS node and configures four service instances with two transactions on each RFS node to push the resulting configuration to four devices.
-
-
-_NSO CFS Node_
-
-
-_NSO RFS Node 1 (Truncated to Fit)_
-
-
-_NSO RFS Node 2 (Truncated to Fit)_
-
-A sequence diagram describing the transactions `t1` and `t2` on RFS 1 and RFS 2. The transactions deploy service configuration to the devices using the NSO CLI:
-
-```
-                                                       push     config
-         CFS              validate  service            config   change
-CLI      create   Nano    config    create   ndtrans=1 netsim   subscriber
-commit   ntrans=2 RFS 1   nwork=1   nwork=1  cq=True   device   ddelay=1
-  t -----> t ---> t1 --> 1s -----> 1s -------[----]---> ex0 ---> 1s
-            \     t2 --> 1s -----> 1s -------[----]---> ex1 ---> 1s
-             \    RFS 2
-              --> t1 --> 1s -----> 1s -------[----]---> ex2 ---> 1s
-                  t2 --> 1s -----> 1s -------[----]---> ex3 ---> 1s
-  wall-clock             1s        1s                            1s = 3s
-```
-
-The four transactions run concurrently, two per RFS node, performing the work and configuring the four devices in \~3 seconds (plus some overhead) of wall-clock time.
-
-You can play with the [examples.ncs/scaling-performance/perf-lsa](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-lsa) example by tweaking the parameters.
-
-```
--d LDEVS
- Number of netsim (ConfD) devices (network elements) started per RFS
- NSO instance.
- Default 2 (4 total)
-
--t NTRANS
- Number of transactions updating the same service in parallel per RFS
- NSO instance. Here, one per device.
- Default: $LDEVS ($LDEVS * 2 total)
-
--w NWORK
- Work per transaction in the service creation and validation phases. One
- second of CPU time per work item.
- Default: 3 seconds of CPU time.
-
--r NDTRANS
- Number of devices the service will configure per service transaction.
- Default: 1
-
--q USECQ
- Use device commit queues.
- Default: True
-
--y DEV_DELAY
- Transaction delay (simulated by sleeping) on the netsim devices (seconds).
- Default: 1 second
-```
-
-See the `README` in the [examples.ncs/scaling-performance/perf-lsa](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-lsa) example for details. For even more details, see the steps in the `showcase` script.
-
-Stop NSO and the netsim devices:
-
-```bash
-make stop
-```
-
-## Scaling RAM and Disk
-
-NSO contains an internal database called CDB, which stores both configuration and operational state data. Understanding the resource consumption of NSO at a steady state requires understanding CDB, as it usually accounts for the vast majority of memory and disk usage.
-
-### CDB
-
-Since version 6.4, NSO supports different CDB persistence modes. With the traditional `in-memory-v1` mode, NSO is optimized for fast random access, making CDB an in-memory database that holds all data in RAM. NSO also keeps the data on disk for durability across system restarts, using a log structure, which is compact and fast to write.
-
-The in-memory data structure is optimized for navigating tree data and usually consumes 2-3x the size of the (compacted) on-disk format. The on-disk log will grow as more changes are performed in the system. A periodic compaction process compacts the write log and reduces its size. Upon startup of NSO, the on-disk version of CDB will be read, and the in-memory structure will be recreated based on the log. A recently compacted CDB will thus start up faster. (By default, NSO automatically determines when to compact the CDB; see [Compaction](../../administration/advanced-topics/cdb-persistence.md#compaction) for fine-tuning.)
-
-The newer `on-demand-v1` persistence mode uses RAM as a cache and will try to keep memory usage below the configured amount. If there is a "cache miss," NSO needs to read the data from disk. This persistence mode uses a much more optimized on-disk format than a straight log, but disk access is still much slower than RAM. Reads of non-cached data will be slower than in the `in-memory-v1` mode.
-
-While `in-memory-v1` mode needs to fit all the data in RAM and cannot function with less, the `on-demand-v1` mode can function with less but performance for "cold" reads will be worse. If `on-demand-v1` mode is given sufficient RAM to fit all the data, performance in steady state will be very similar to that of `in-memory-v1`. The main difference will be when the data is being loaded from disk: at system startup in case of `in-memory-v1`, making startup time linear with database size; or when data is first accessed in case of `on-demand-v1`, making startup mostly independent of data size but introducing a disk-read delay on first access (with sufficient RAM, subsequent reads are served directly from memory). See [CDB Persistence](../../administration/advanced-topics/cdb-persistence.md) for further comparison of the modes.
-
-For the best performance, CDB therefore needs sufficient RAM to fit all the data, regardless of persistence mode. In addition to that, NSO also needs RAM to run all the code. However, the latter is relatively static in most setups, compared to the memory needed to hold the data.
-
-### Services and Devices in CDB
-
-CDB is a YANG-modeled database. By writing a YANG model, it is possible to store any kind of data in NSO and access it via one of the northbound interfaces of NSO. From this perspective, a service or a device's configuration is like most other YANG-modeled data. The number of service instances and managed devices in NSO in the steady state affect how much space the data consumes on disk. In case of the `in-memory-v1` persistence mode, they also directly affect memory consumption, as all data is kept in memory for fast access.
-
-But keep in mind that services tend to be modified from time to time, and with a higher total number of service instances, changes to those services are more likely. A higher number of service instances means more transactions to deploy changes, which means an increased need for optimizing transactional throughput, available CPU processing, RAM, and disk. See [Designing for Maximal Transaction Throughput](scaling-and-performance-optimization.md#ncs.development.scaling.throughput) for details.
-
-### CDB Stores the YANG Model Schema
-
-In addition to storing instance data, CDB also stores the schema (the YANG models) on disk and reads it into memory on startup. Having a large schema (many or large YANG models) loaded means both disk and RAM will be used, even when starting up an “empty” NSO, i.e., no instance data is stored in CDB.
-
-In particular, device YANG models can be of considerable size. For example, the YANG models in recent versions of Cisco IOS XR have over 750,000 lines. Loading one such NED will consume about 1 GB of RAM and slightly less disk space. In a mixed vendor network, you would load NEDs for all or some of these device types. With CDM, you can have multiple XR NEDs loaded to support communicating with different versions of XR and similarly for other devices, further consuming resources.
-
-In comparison, most CLI NEDs only model a subset of a device and are, as a result, much smaller—most often under 100,000 lines of YANG.
-
-For small NSO systems, the schema will usually consume more resources than the instance data, and NEDs, in particular, are the most significant contributors to resource consumption. As the system grows and more service and device configurations are added, the percentage of the total resource usage used for NED YANG models will decrease.
-
-{% hint style="info" %}
-NEDs with a large schema and many YANG models often include a significant number of YANG models that are unused. If RAM usage is an issue, consider removing unused YANG models from such NEDs.
-{% endhint %}
-
-#### Note on the Java VM
-
-The Java VM uses its own copy of the schema, which is also why the JVM memory consumption follows the size of the loaded YANG schema.
-
-### The Size of CDB
-
-Accurately predicting the size of CDB means accurately modeling its internal data structure. Since the result will depend on the YANG models and what actual values are stored in the database, the easiest way to understand how the size grows is to start NSO with the schema and data in question and then measure the resource usage.
-
-Performing accurate measurements can be a tedious process or sometimes impossible. When impossible, an estimate can be reached by extrapolating from known data, which is usually much more manageable and accurate enough.
-
-We can look at the disk and RAM used for the running datastore, which stores configuration. On a freshly started NSO with `in-memory-v1` mode, it doesn't occupy much space at all:
-
-```bash
-# show ncs-state internal cdb datastore running | select ram-size | select disk-size
-NAME     DISK SIZE  RAM SIZE
-------------------------------
-running  3.83 KiB   26.27 KiB
-```
-
-### Devices, Small and Large
-
-After adding a device with a small configuration, in this case a Cisco NXOS switch with about 700 lines of CLI configuration, there is a clear increase:
-
-```bash
-# show ncs-state internal cdb datastore running | select ram-size | select disk-size
-NAME     DISK SIZE  RAM SIZE
---------------------------------
-running  28.51 KiB  240.99 KiB
-```
-
-Compared to the size of CDB before we added the device, we can deduce that the device with its configuration takes up \~214 kB in RAM and 25 kB on disk. Adding 1000 such devices shows how CDB resource consumption increases linearly with the number of devices. The graph below shows the RAM and disk usage of the running datastore in CDB over time. We perform a sequential `sync-from` operation on the 1000 devices, and while it is executing, we see how resource consumption increases. At the end, resource consumption has reached about 150 MB of RAM and 25 MB of disk, equating to \~150 KiB of RAM and \~25 KiB of disk per device.
-
-```bash
-# request devices device * sync-from
-```
-
-{% hint style="info" %}
-The wildcard expansion in the request `devices device * sync-from` is processed by the CLI, which will iterate over the devices sequentially. This is inefficient and can be sped up by using `devices sync-from`, which instead processes the devices concurrently. The sequential mode produces a graph that better illustrates how this scales, which is why it is used here.
-{% endhint %}
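-
-The concurrent variant is also available from Python MAAPI; a sketch using the `/devices/sync-from` action (output leaf names per the `tailf-ncs` model):
-
-```python
-import ncs
-
-with ncs.maapi.single_read_trans('admin', 'python') as t:
-    root = ncs.maagic.get_root(t)
-    output = root.devices.sync_from()  # runs across devices concurrently
-    for r in output.sync_result:
-        print(r.device, bool(r.result), r.info or '')
-```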
-
-
-
-A device with a larger configuration will consume more space. With a single Juniper MX device whose configuration is close to half a million lines, there is a substantial increase:
-
-```bash
-# show ncs-state internal cdb datastore running | select ram-size | select disk-size
-NAME     DISK SIZE  RAM SIZE
---------------------------------
-running  4.59 MiB   33.97 MiB
-```
-
-Similarly, adding more such devices allows monitoring of how it scales linearly. In the end, with 100 devices, CDB consumes 3.35 GB of RAM and 450 MB of disk, or \~33.5 MiB of RAM and \~4.5 MiB disk space per device.
-
-
-
-Thus, you must do more than dimension your NSO installation based on the number of devices. You must also understand roughly how many resources each device will consume.
-
-Unless a device uses NETCONF, NSO will not store the configuration as retrieved from the device. When configuration is retrieved, it is parsed by the NED into a structured format.
-
-For example, here is a basic BGP stanza from a Cisco IOS device:
-
-```
-router bgp 64512
-address-family ipv4 vrf TEST
-no synchronization
-redistribute connected metric 123 route-map IPV4-REDISTRIBUTE-CONNECTED-TO-BGP
-!
-```
-
-After being parsed by the IOS CLI NED, the equivalent configuration looks like this in NSO:
-
-```xml
-<router xmlns="urn:ios">
-  <bgp>
-    <as-no>64512</as-no>
-    <address-family>
-      <with-vrf>
-        <ipv4>
-          <af>unicast</af>
-          <vrf>
-            <name>TEST</name>
-            <redistribute>
-              <connected>
-                <metric>123</metric>
-                <route-map>IPV4-REDISTRIBUTE-CONNECTED-TO-BGP</route-map>
-              </connected>
-            </redistribute>
-          </vrf>
-        </ipv4>
-      </with-vrf>
-    </address-family>
-  </bgp>
-</router>
-```
-
-A single line, such as `redistribute connected metric 123 route-map IPV4-REDISTRIBUTE-CONNECTED-TO-BGP`, is parsed into a structure of multiple nodes / YANG leaves. There is no exact correlation between the number of lines of configuration and the space it consumes in NSO. The easiest way to determine the resource consumption of a device's configuration is thus to load it into NSO and check the size of CDB before and after.
-
-### Planning Resource Consumption
-
-Forming a rough estimate of CDB resource consumption for planning can be helpful.
-
-Divide your devices into categories. Get a rough measurement for an exemplar in each category, add a safety margin, e.g., double the resource consumption, and multiply by the number of devices in that category. Example:
-
-
-| Device Type         | RAM    | Disk  | Number of Devices | Margin | Total RAM | Total Disk |
-| ------------------- | ------ | ----- | ----------------- | ------ | --------- | ---------- |
-| FTTB access switch  | 200KiB | 25KiB | 30000             | 100%   | 11718MiB  | 1464MiB    |
-| Mobile Base Station | 120KiB | 11KiB | 15000             | 100%   | 3515MiB   | 322MiB     |
-| Business CPE        | 50KiB  | 4KiB  | 50000             | 50%    | 3662MiB   | 292MiB     |
-| PE / Edge Router    | 10MiB  | 1MiB  | 1000              | 25%    | 12GiB     | 1.2GiB     |
-| **Total**           |        |       |                   |        | 30.6GiB   | 3.3GiB     |
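-
-The arithmetic behind such a table is simple enough to script. A sketch that recomputes the totals from the per-category estimates above (rounding explains small differences):
-
-```python
-# (name, ram_kib, disk_kib, device_count, safety_margin)
-categories = [
-    ('FTTB access switch',        200,        25, 30000, 1.00),
-    ('Mobile Base Station',       120,        11, 15000, 1.00),
-    ('Business CPE',               50,         4, 50000, 0.50),
-    ('PE / Edge Router',    10 * 1024,  1 * 1024,  1000, 0.25),
-]
-
-total_ram = total_disk = 0.0
-for name, ram_kib, disk_kib, count, margin in categories:
-    ram_mib = ram_kib * count * (1 + margin) / 1024
-    disk_mib = disk_kib * count * (1 + margin) / 1024
-    total_ram += ram_mib
-    total_disk += disk_mib
-    print(f'{name:20s} {ram_mib:8.0f} MiB RAM  {disk_mib:7.0f} MiB disk')
-print(f'{"Total":20s} {total_ram / 1024:8.1f} GiB RAM  {total_disk / 1024:7.1f} GiB disk')
-```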
-
-### The Size of a Service
-
-A YANG model describes the input to services, and just like any other data in CDB, it consumes resources. Compared to the typical device configuration, where even small devices often have a few hundred lines of configuration, a small service might only have a handful of configurable inputs. Even extensive services rarely have more than 50 inputs.
-
-When services write configuration, a reverse diff set is generated and saved as part of the service's private data. The more configuration a service writes, the larger its reverse diff set will be and, thus, the more resources it will consume. What appears as a small service with just a handful of inputs could consume considerable resources if it writes a lot of configuration. Similarly, we save a forward diff set by default, contributing to the size. Service metadata attributes, the backpointer list, and the refcount are also added to the written configuration, which consumes some resources. For example, if 50 services all (share)create a node, there will be 50 backpointers in the database, which consumes some space.
-
-### Implications of a Large CDB
-
-As shown above, CDB scales linearly. Modern servers commonly support multiple terabytes of RAM, making it possible to support 50,000 - 100,000 such large router devices in NSO, well beyond the size of any currently existing network. However, beyond consuming RAM and disk space, the size of the CDB may also affect the startup time of NSO and certain other operations like upgrades. In the previous example, 100 devices were used, which resulted in a CDB size of 461 MB on disk. Starting that on a standard laptop takes about 100 seconds. With 50,000 devices, CDB on-disk would be over 230 GB, which would take around 6 hours to load on the same laptop, if it had enough RAM. The typical server is considerably faster than the average laptop here, but loading a large CDB may take considerable time, unless `on-demand-v1` persistence mode is used.
-
-This also affects the sync/resync time in high availability setups, where the database size increases the data transfer needed.
-
-A working system needs more than just storing the data. It must also be possible to use the devices and services and apply the necessary operations to these for the environment in which they operate. For example, it is common in brownfield environments to frequently run the `sync-from` action. Most device-related operations, including `sync-from`, can run concurrently across multiple devices in NSO. Syncing an extensive device configuration will take a few minutes or so. With 50,000 such large devices, we are looking at a total time of tens of hours or even days. Many environments require higher throughput, which could be handled using an LSA setup and spreading the devices over many NSO RFS nodes. `sync-from` is an example of an action that is easy to scale up because it runs concurrently. For example, spreading the 50,000 devices over 5 NSO RFS nodes, each with 10,000 devices, would lead to a speedup close to 5x.
-
-Using LSA, multiple Resource Facing Service (RFS) nodes can be employed to spread the devices across multiple NSO instances. This allows increasing the parallelism in sync-from and other operations, as described in [Designing for Maximal Transaction Throughput](scaling-and-performance-optimization.md#ncs.development.scaling.throughput), making it possible to scale to an almost arbitrary number of devices. Similarly, the services associated with each device are also spread across the RFS nodes, making it possible to operate on them in parallel. Finally, a top CFS node communicates with all RFS nodes, making it possible to administrate the entire setup as one extensive system.
-
-## Checklists
-
-For smooth operation of NSO instances, consider all of the following:
-
-* Ensure there is enough RAM for NSO to run, with _**ample**_ headroom.
-* `create()` should normally run in a few hundred milliseconds, perhaps a few seconds for extensive services.
- * Consider splitting into smaller services.
- * Stacked services allow the composition of many smaller services into a larger service. A common best-practice design pattern is to have one Resource Facing Service (RFS) instance map to one device or network element.
- * Avoid conflicts between service instances.
- * Improves performance compared to a single large service for typical modifications.
- * Only services with changed input will have their `create()` called.
- * A small change to the Customer Facing Service (CFS) that results in changes to a subset of the lower services avoids running `create()` for all lower services.
-* No external calls or `sync-from` in `create()` code.
- * Use nano-services to do external calls asynchronously.
- * Never run `sync-from` from `create()` code.
-* Carefully consider the complexity of XPath constraints, in particular around lists.
- * Avoid XPath expressions with linear scaling or worse.
- * For example, avoid checking something for every element in a list, as performance will drop radically as the list grows.
- * XPath expressions involving nested lists or comparisons between lists can lead to quadratic scaling.
-* Make sure you have an efficient transaction ID method for NEDs.
- * In the worst case, the NED will compute the transaction ID based on a config hash, which means it will fetch the entire config to compute the transaction ID.
-* Enable commit queues and ensure transactions utilize as many CPU cores in a multi-core system as possible to increase transactional throughput.
-* Ensure there are enough file descriptors available.
- * In many Linux systems, the default limit is 1024.
- * If we, for example, assume that there are 4 northbound interface ports (CLI, RESTCONF, SNMP, JSON-RPC, or similar), plus a few hundred IPC connections, then 5 x 1024 == 5120 file descriptors could be needed. But one might as well use the next power of two, 8192, to be on the safe side.
-* See [Enable Strict Overcommit Accounting](../../administration/installation-and-deployment/system-install.md#enable-strict-overcommit-accounting-on-the-host) or [Overcommit Inside a Container](../../administration/installation-and-deployment/containerized-nso.md#d5e8605).
-
-## Hardware Sizing
-
-### Lab Testing and Development
-
-While a minimal setup with a single CPU core and 1 GB of RAM is enough to start NSO for lab testing and development, it is recommended to have at least 2 CPU cores to avoid CPU contention and to run at least two transactions concurrently, and 4 GB of RAM to be able to load a few NEDs.
-
-Contemporary laptops typically work well for NSO service development.
-
-### Production
-
-For production systems, it is recommended to have at least 8 CPU cores with as high a clock frequency as possible. This ensures all NSO processes can run without contending for the same CPU cores. More CPU cores enable more transactions to run in parallel on the same processor. For higher-scale systems, an LSA setup should be investigated together with a technical expert. See [Designing for Maximal Transaction Throughput](scaling-and-performance-optimization.md#ncs.development.scaling.throughput).
-
-With `in-memory-v1` CDB persistence mode, NSO is not very disk intensive since CDB is loaded into RAM. On startup, CDB is read from disk into memory. Therefore, for fast startups of NSO, rapid backups, and other similar administrative operations, it is recommended to use a fast disk, for example, an NVMe SSD.
-
-Disk storage plays an important role in `on-demand-v1` persistence mode, where it more directly affects query times (for "cold" queries). Recommended are the fastest disks, with as low latency as possible, such as local NVMe SSDs.
-
-Network management protocols typically consume little network bandwidth. It is often less than 10 Mbps but can burst many times that. While 10 Gbps is recommended, 1 Gbps network connectivity will usually suffice. If you use High Availability (HA), the continuous HA updates are typically relatively small and do not consume a lot of bandwidth. Low latency, preferably below 1 ms and well within 10 ms, will impact performance significantly more than increasing bandwidth beyond 1 Gbps. 10 Gbps or more can make a difference for the initial synchronization in case the nodes are not in sync and helps avoid congestion when doing backups over the network or similar.
-
-The in-memory portion of CDB needs to fit in RAM, and NSO needs working memory to process queries. This is a hard requirement. NSO can only function with enough memory. In case of `in-memory-v1` CDB persistence mode, less than the required amount of RAM does not lead to performance degradation - it prevents NSO from working. For example, if CDB consumes 50 GB, ensure you have at least 64 GB of RAM. There needs to be some headroom for RAM to allow temporary usage during, for example, heavy queries.
-
-Swapping is a way to use disk space as RAM, and while it can make it possible to start an NSO instance that otherwise would not fit in RAM, it would lead to terrible performance. See [Enable Strict Overcommit Accounting](../../administration/installation-and-deployment/system-install.md#enable-strict-overcommit-accounting-on-the-host) or [Overcommit Inside a Container](../../administration/installation-and-deployment/containerized-nso.md#d5e8605) for details.
-
-Provide at least 32 GB of RAM and increase with the growth of CDB. As described in [Scaling RAM and Disk](scaling-and-performance-optimization.md#ncs.development.scaling.memory), the consumption of memory and disk resources for devices and services will vary greatly with the type and size of the service or device.
diff --git a/development/advanced-development/web-ui-development/README.md b/development/advanced-development/web-ui-development/README.md
deleted file mode 100644
index a95150c3..00000000
--- a/development/advanced-development/web-ui-development/README.md
+++ /dev/null
@@ -1,468 +0,0 @@
----
-description: NSO Web UI development information.
----
-
-# Web UI Development
-
-The [NSO Web UI](/operation-and-usage/webui/README.md) provides a comprehensive baseline interface designed to cover common network management needs with a focus on usability and core functionality. It serves as a reliable starting point for customers who want immediate access to essential features without additional development effort.
-
-For customers with specialized requirements—such as unique workflows, custom aesthetics, or integration with external systems—the NSO platform offers flexibility to build tailored Web UIs. This enables teams to create user experiences that precisely match their operational needs and branding guidelines.
-
-At the core of NSO’s Web UI capabilities is the northbound [JSON-RPC API](json-rpc-api.md), which adheres to the [JSON-RPC 2.0 specification](https://www.jsonrpc.org/specification) and uses HTTP/S as the transport protocol.
-
-The JSON-RPC API contains a handful of methods with well-defined input `method` and `params`, along with the output `result`.
-
-In addition, the API also implements a Comet model, based on long polling, to allow the client to subscribe to different server events and receive event notifications about those events in near real-time.
-
-You can call these from a browser using the modern [fetch](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) API:
-
-{% code title="With fetch" %}
-``` javascript
-fetch('http://127.0.0.1:8080/jsonrpc', {
- method: 'POST',
- headers: {
- 'Content-Type': 'application/json'
- },
- body: JSON.stringify({
- jsonrpc: '2.0',
- id: 1,
- method: 'login',
- params: {
- user: 'admin',
- passwd: 'admin'
- }
- })
-})
-.then(response => response.json())
-.then(data => {
- if (data.result) {
- console.log(data.result);
- } else {
- console.log(data.error.type);
- }
-});
-```
-{% endcode %}
-
-Or from the command line using [curl](https://curl.se):
-
-{% code title="With curl" %}
-``` bash
-curl \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", "id": 1, "method": "login", "params": {"user": "admin", "passwd": "admin"}}' \
- http://127.0.0.1:8080/jsonrpc
-```
-{% endcode %}
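-
-Or from a script, sketched here with Python and the third-party `requests` library (an assumption; any HTTP client works):
-
-{% code title="With Python requests" %}
-```python
-import requests
-
-resp = requests.post('http://127.0.0.1:8080/jsonrpc', json={
-    'jsonrpc': '2.0',
-    'id': 1,
-    'method': 'login',
-    'params': {'user': 'admin', 'passwd': 'admin'},
-})
-data = resp.json()
-print(data['result'] if 'result' in data else data['error'])
-```
-{% endcode %}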
-
-
-## Example of a Common Flow
-
-You can read about all the available methods and their signatures in the JSON-RPC API section, but here is a working example of what a common flow looks like:
-
-1. Log in.
-2. Get system settings.
-3. Create a new (read) transaction handle.
-4. Read a value.
-5. Create a new (read-write) transaction, in preparation for changing the value.
-6. Set a value.
-7. Validate and commit (save) the changes.
-
-A secondary example is also provided that demonstrates the use and implementation of a Comet channel client for receiving notifications:
-
-1. Log in.
-2. Initialize comet channel subscription.
-3. Commit a change to trigger a comet notification.
-4. Stop and clean up the comet.
-
-For a complete working example with a web UI, see the `webui-basic-example` NSO package in `${NCS_DIR}/examples.ncs/northbound-interfaces/webui`. This package demonstrates basic JSON-RPC API usage and can be run with `make demo`.
-
-{% code title="index.js" overflow="wrap" lineNumbers="true" %}
-
-```javascript
-// The following code is purely for example purposes.
-// The code has inline comments for a better understanding.
-// Your mileage might vary.
-
-const jsonrpcUrl = 'http://127.0.0.1:8080/jsonrpc';
-const ths = {};
-let cookie;
-
-function log(msg) {
- console.log(msg);
-}
-
-function logAsciiTitle(titleText) {
- const border = '='.repeat(titleText.length + 8); // +8 for padding and corners
- const padding = ' '.repeat(titleText.length);
-
- log(''); // Add a blank line for spacing
- log(border);
- log(`== ${padding} ==`);
- log(`== ${titleText} ==`);
- log(`== ${padding} ==`);
- log(border);
- log(''); // Add a blank line for spacing
-}
-
-/**
- * CometChannel - Modern comet notification channel for NSO JSON-RPC API
- *
- * Usage:
- * const comet = new CometChannel({ jsonRpcCall, onError });
- * comet.on('notification-handle', (message) => { console.log(message); });
- * comet.stop();
- */
-class CometChannel {
- constructor(options = {}) {
- this.jsonRpcCall = options.jsonRpcCall;
- this.onError = options.onError;
- this.id = options.id || 'comet-' + String(Math.random()).substring(2);
- this.sleep = options.sleep || 1000;
-
- this.handlers = new Map();
- this.polling = false;
- this.stopped = false;
- }
-
- on(handle, callback) {
- if (!callback || typeof callback !== 'function') {
- throw new Error(`Missing callback function for handle: ${handle}`);
- }
-
- if (!this.handlers.has(handle)) {
- this.handlers.set(handle, []);
- }
-
- this.handlers.get(handle).push(callback);
-
- // Start polling if not already running
- if (!this.polling && !this.stopped) {
- this._poll();
- }
- }
-
- async stop() {
- if (this.stopped) {
- return;
- }
-
- this.stopped = true;
- this.polling = false;
-
- const handles = Array.from(this.handlers.keys());
- const unsubscribePromises = handles.map(handle =>
- this.jsonRpcCall('unsubscribe', { handle }).catch((err) => {
- console.warn(`Failed to unsubscribe from ${handle}:`, err.message);
- }),
- );
-
- await Promise.all(unsubscribePromises);
- this.handlers.clear();
- }
-
- async _poll() {
- if (this.polling || this.stopped || this.handlers.size === 0) {
- return;
- }
-
- this.polling = true;
-
- try {
- const notifications = await this.jsonRpcCall('comet', {
- comet_id: this.id,
- });
-
- if (!this.stopped) {
- await this._handleNotifications(notifications);
- }
- } catch (error) {
- if (!this.stopped) {
- this._handlePollError(error);
- return; // Don't continue polling on error, error handler will retry
- }
- } finally {
- this.polling = false;
- }
-
- // Continue polling if not stopped
- if (!this.stopped && this.handlers.size > 0) {
- setTimeout(() => this._poll(), 0);
- }
- }
-
- async _handleNotifications(notifications) {
- if (!Array.isArray(notifications)) {
- return;
- }
-
- for (const notification of notifications) {
- const { handle, message } = notification;
- const callbacks = this.handlers.get(handle);
-
- // If we received a notification with no handlers, unsubscribe
- if (!callbacks || callbacks.length === 0) {
- try {
- await this.jsonRpcCall('unsubscribe', { handle });
- } catch (error) {
- console.warn(`Failed to unsubscribe from ${handle}:`, error.message);
- }
- continue;
- }
-
- // Call all registered callbacks for this handle
- callbacks.forEach((callback) => {
- try {
- callback(message);
- } catch (error) {
- console.error(`Error in notification handler for ${handle}:`, error);
- }
- });
- }
- }
-
- _handlePollError(error) {
- const errorType = error.type || error.message;
-
- if (errorType === 'comet.duplicated_channel') {
- this.onError(error);
- this.stopped = true;
- } else {
- this.onError(error);
- // Retry after sleep interval
- setTimeout(() => this._poll(), this.sleep);
- }
- }
-}
-
-async function jsonRpcCall(method, params = {}) {
- const headers = {
- Accept: 'application/json;charset=utf-8',
- 'Content-Type': 'application/json;charset=utf-8',
- };
-
- if (cookie) {
- headers.Cookie = cookie;
- }
-
- const body = JSON.stringify({
- jsonrpc: '2.0',
- id: 1,
- method,
- params,
- });
-
- try {
- log(`REQUEST /jsonrpc/${method}:`);
- log(JSON.stringify(params, undefined, 2));
-
- const response = await fetch(jsonrpcUrl, {
- method: 'POST',
- headers,
- body,
- });
-
- if (!cookie) {
- const setCookieHeader = response.headers.get('set-cookie');
- if (setCookieHeader) {
- cookie = setCookieHeader.split(';')[0];
- }
- }
-
- if (!response.ok) {
- throw new Error(`Network error: ${response.status} ${response.statusText}`);
- }
-
- const data = await response.json();
-
- if (data.error) {
- const reasons = data.error.data
- && data.error.data.errors
- && data.error.data.errors[0]
- && data.error.data.errors[0].reason;
- let errorMessage = `JSON-RPC error: ${data.error.code} ${data.error.message}`;
-
- if (reasons) {
- errorMessage += ` (Reason: ${reasons})`;
- }
-
- throw new Error(errorMessage);
- }
-
- log(`RESPONSE /jsonrpc/${method}:`);
- log(JSON.stringify(data.result, undefined, 2));
- log('');
- return data.result;
- } catch (error) {
- log(`ERROR in ${method}: ${error.message}`);
- throw error;
- }
-}
-
-async function login() {
- return jsonRpcCall('login', { user: 'admin', passwd: 'admin' });
-}
-
-async function getSystemSetting() {
- return jsonRpcCall('get_system_setting');
-}
-
-async function newTrans(mode, tag) {
- const result = await jsonRpcCall('new_trans', { mode, tag, db: 'running' });
- ths[tag] = result.th;
- return result;
-}
-
-async function getValue(tag, valuePath) {
- const th = ths[tag];
- return jsonRpcCall('get_value', { th, path: valuePath });
-}
-
-async function setValue(tag, valuePath, newValue) {
- const th = ths[tag];
- return jsonRpcCall('set_value', { th, path: valuePath, value: newValue });
-}
-
-async function deleteValue(tag, path) {
- const th = ths[tag];
- return jsonRpcCall('delete', { th, path });
-}
-
-async function validateTrans(tag) {
- const th = ths[tag];
- try {
- await jsonRpcCall('validate_trans', { th });
- return null; // no validation errors reported
- } catch (error) {
- return error.message;
- }
-}
-
-async function validateAndCommit(tag) {
- const th = ths[tag];
- await jsonRpcCall('validate_commit', { th });
- await jsonRpcCall('commit', { th });
-}
-
-const commonExample = async () => {
- try {
- const readTag = 'webui-read';
- const writeTag = 'webui-write';
- const path = '/ncs:devices/global-settings/connect-timeout';
- await login();
- await getSystemSetting();
- await newTrans('read', readTag);
- await getValue(readTag, path);
- await newTrans('read_write', writeTag);
- await setValue(writeTag, path, 20);
- await getValue(writeTag, path);
- const validationError = await validateTrans(writeTag);
- if (validationError) {
- // NOTE handle validation error if any
- }
- await validateAndCommit(writeTag);
- log(`INFO Note, using read tag: ${readTag}`);
- await getValue(readTag, path);
- } catch (error) {
- log(`ERROR Sequence aborted due to error: ${error.message}`);
- log(error);
- }
-};
-
-const cometExample = async () => {
- try {
- await login();
-
- const comet = new CometChannel({
- jsonRpcCall,
- onError: (error) => {
- log(`ERROR Comet error: ${error.message}`);
- },
- });
- const path = '/ncs:devices/global-settings/connect-timeout';
- const handle = `${comet.id}-connect-timeout`;
- log(`INFO Setting up subscription with handle: ${handle}`);
-
- comet.on(handle, (message) => {
- log('=== COMET NOTIFICATION RECEIVED ===');
- log(JSON.stringify(message, null, 2));
- log('=============================');
- });
-
- await jsonRpcCall('subscribe_changes', {
- path,
- handle,
- comet_id: comet.id,
- });
-
- // Check subscriptions are registered
- const subs = await jsonRpcCall('get_subscriptions');
- log(`INFO Active subscriptions count: ${subs.subscriptions.length}`);
-
- // Now make a change to trigger notification
-    log('INFO Committing a change to trigger comet notification...');
- const writeTag = 'test-write';
- await newTrans('read_write', writeTag);
- await setValue(writeTag, path, 42);
- await validateAndCommit(writeTag);
-
- await newTrans('read_write', writeTag);
- await deleteValue(writeTag, path);
- await validateAndCommit(writeTag);
-
- comet.stop().then(() => {
- log('INFO Comet channel stopped.');
- process.exit(0);
- });
- } catch (error) {
- log(`ERROR Comet sequence failed: ${error.message}`);
- log(error);
- }
-};
-
-(async () => {
- logAsciiTitle('Vanilla JS fetch common flow example');
- await commonExample();
-
- logAsciiTitle('Vanilla JS fetch comet example');
- await cometExample();
-})();
-
-```
-{% endcode %}
-
-## Single Sign-On (SSO)
-
-The Single Sign-On functionality enables users to log in via HTTP-based northbound APIs with a single sign-on authentication scheme, such as SAMLv2. Currently, it is only supported for the JSON-RPC northbound interface.
-
-{% hint style="info" %}
-For Single Sign-On to work, the Package Authentication needs to be enabled; see [Package Authentication](../../../administration/management/aaa-infrastructure.md#ug.aaa.packageauth).
-{% endhint %}
-
-When enabled, the endpoint `/sso` is made public and handles Single Sign-On attempts.
-
-An example configuration for the cisco-nso-saml2-auth Authentication Package is presented below. Note that `/ncs-config/aaa/auth-order` does not need to be set for Single Sign-On to work!
-
-{% code title="Example: Example ncs.conf to enable SAMLv2 Single Sign-On" %}
-```xml
-<aaa>
-  <package-authentication>
-    <enabled>true</enabled>
-    <packages>
-      <package>cisco-nso-saml2-auth</package>
-    </packages>
-  </package-authentication>
-  <single-sign-on>
-    <enabled>true</enabled>
-  </single-sign-on>
-</aaa>
-```
-{% endcode %}
-
-A client attempting single sign-on authentication should request the `/sso` endpoint and then follow the continued authentication operation from there. For example, for `cisco-nso-saml2-auth`, the client is redirected to an Identity Provider (IdP), which subsequently handles the authentication, and then redirects the client back to the `/sso` endpoint to validate the authentication and set up the session.
-
-## Web Server
-
-An embedded basic web server can be used to deliver static and Common Gateway Interface (CGI) dynamic content to a web client, such as a web browser. See [Web Server](../../connected-topics/web-server.md) for more information.
diff --git a/development/advanced-development/web-ui-development/json-rpc-api.md b/development/advanced-development/web-ui-development/json-rpc-api.md
deleted file mode 100644
index f4cdb3a8..00000000
--- a/development/advanced-development/web-ui-development/json-rpc-api.md
+++ /dev/null
@@ -1,3786 +0,0 @@
----
-description: API documentation for JSON-RPC API.
----
-
-# JSON-RPC API
-
-## Protocol Overview
-
-The [JSON-RPC 2.0 Specification](https://www.jsonrpc.org/specification) contains all the details you need to understand the protocol, but a short version is given here:
-
-{% tabs %}
-{% tab title="Request Payload" %}
-A request payload typically looks like this:
-
-```json
-{"jsonrpc": "2.0",
- "id": 1,
- "method": "subtract",
- "params": [42, 23]}
-```
-
-Where, the `method` and `params` properties are as defined in this manual page.
-{% endtab %}
-
-{% tab title="Response Payload" %}
-A response payload typically looks like this:
-
-```json
-{"jsonrpc": "2.0",
- "id": 1,
- "result": 19}
-```
-
-Or:
-
-```json
-{"jsonrpc": "2.0",
- "id": 1,
- "error":
- {"code": -32601,
- "type": "rpc.request.method.not_found",
- "message": "Method not found"}}
-```
-
-The request `id` param is returned as-is in the response to make it easy to pair requests and responses.
-{% endtab %}
-{% endtabs %}
-
-The batch JSON-RPC standard is dependent on matching requests and responses by `id`, since the server processes requests in any order it sees fit, e.g.:
-
-```json
-[{"jsonrpc": "2.0",
- "id": 1,
- "method": "subtract",
- "params": [42, 23]}
-,{"jsonrpc": "2.0",
- "id": 2,
- "method": "add",
- "params": [42, 23]}]
-```
-
-With a possible response like (first result for `add`, the second result for `subtract`):
-
-```json
-[{"jsonrpc": "2.0",
- "id": 2,
- "result": 65}
-,{"jsonrpc": "2.0",
- "id": 1,
- "result": 19}]
-```
-
-### Trace Context
-
-JSON-RPC supports the Trace Context functionality corresponding to the IETF Draft [I-D.draft-ietf-netconf-restconf-trace-ctx-headers-00](https://www.ietf.org/archive/id/draft-ietf-netconf-restconf-trace-ctx-headers-00.html), which is an adaptation of the [W3C Trace Context](https://www.w3.org/TR/2021/REC-trace-context-1-20211123/) standard. Trace Context makes it possible to follow a client's functionality via progress trace (logging) by `trace-id`, `span-id`, and `tracestate`. Trace Context standardizes the format of `trace-id`, `span-id`, and key-value pairs to be sent between distributed entities. The terms `span-id` and `parent-span-id` in NSO correspond to the naming of `parent-id` used in the Trace Context standard.
-
-Trace Context consists of two HTTP headers `traceparent` and `tracestate`. Header `traceparent` must be of the format:
-
-```
-traceparent = <version>-<trace-id>-<parent-id>-<flags>
-```
-
-Where, `version = "00"` and `flags = "01"`. The support for the values of `version` and `flags` may change in the future depending on the extension of standard or functionality.
-
-An example of header `traceparent` in use is:
-
-```
-traceparent: 00-100456789abcde10123456789abcde10-001006789abcdef0-01
-```
-
-Header `tracestate` is a vendor-specific list of key-value pairs. An example of header `tracestate` in use is:
-
-```
-tracestate: key1=value1,key2=value2
-```
-
-Where, a value may contain space characters but not end with a space.
-
-NSO implements Trace Context alongside the legacy way of handling trace-id, where the trace-id comes as a flag parameter to `validate_commit`. For flags usage see method `commit`. These two different ways of handling trace-id cannot be used at the same time. If both are used, the request generates an error response.
-
-NSO will consider the Trace Context headers in JSON-RPC requests if the corresponding element is set to `true` in the logs section of the configuration file. Trace Context is handled by the progress trace functionality, see also [Progress Trace](../progress-trace.md).
-
-The information in Trace Context will be presented by the progress trace output when invoking JSON-RPC methods `validate_commit`, `apply`, or `run_action`. Those methods will also generate a Trace Context if it has not already been given in a request.
-
-The functionality a client aims to perform can consist of several JSON-RPC methods up to a transaction commit being executed. Those methods are carried out at the transaction commit and should share a common trace-id. Such a scenario calls for storing the Trace Context in the transaction involved. For this reason, JSON-RPC will only consider a Trace Context header for methods that take a transaction as a parameter, with the exception of the method `commit`, which ignores the Trace Context header.
-
-{% hint style="info" %}
-You can either let methods `validate_commit`, `apply`, or `run_action` automatically generate a Trace Context, or you can add a Trace Context header for one of the involved JSON-RPC methods sharing the same transaction.
-
-If two methods, using the same transaction, are provided with different Trace Context, the latter Trace Context will be used - a procedure not recommended.
-{% endhint %}
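-
-As a sketch, a client can attach Trace Context by adding the two headers to an ordinary JSON-RPC POST. Everything below (URL, transaction handle, header values) is illustrative only:
-
-```js
-// Sketch: attaching W3C Trace Context headers to a JSON-RPC request.
-const response = await fetch('http://127.0.0.1:8008/jsonrpc', {
-  method: 'POST',
-  headers: {
-    'Content-Type': 'application/json',
-    // version "00", a 16-byte trace-id, an 8-byte parent-id, flags "01"
-    traceparent: '00-100456789abcde10123456789abcde10-001006789abcdef0-01',
-    tracestate: 'key1=value1,key2=value2',
-  },
-  body: JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'validate_commit', params: { th: 2 } }),
-});
-```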
-
-### Common Concepts
-
-The URL for the JSON-RPC API is `/jsonrpc`. For logging and debugging purposes, you can add anything as a subpath to the URL, for example, turning the URL into `/jsonrpc/<method>`, which will allow you to see the exact method in different browsers' **Developer Tools** - **Network** tab - **Name** column, rather than just an opaque `jsonrpc`.
-
-{% hint style="info" %}
-For brevity, in the upcoming descriptions of each method, only the input `params` and the output `result` are mentioned, although they are part of a fully formed JSON-RPC payload.
-{% endhint %}
-
-* Authorization is based on HTTP cookies. The response to a successful call to `login` would create a session, and set an HTTP-only cookie, and even an HTTP-only secure cookie over HTTPS, named `sessionid`. All subsequent calls are authorized by the presence and the validity of this cookie.
-* The `th` param is a transaction handle identifier as returned from a call to `new_trans`.
-* The `comet_id` param is a unique ID (decided by the client) that must be given first in a call to the `comet` method, and then to upcoming calls which trigger comet notifications.
-* The `handle` param needs to have a semantic value (not just a counter) prefixed with the `comet` ID (for disambiguation), and overrides the handle that would have otherwise been returned by the call. This gives more freedom to the client and sets semantic handles.
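-
-Putting these concepts together, a minimal client could look like the following sketch (Node.js 18+ `fetch` is assumed; the URL and credentials are placeholders). Later sketches in this section reuse this `call` helper:
-
-```js
-// Sketch: minimal JSON-RPC helper that keeps the sessionid cookie and
-// unwraps result/error. Not production code.
-const BASE = 'http://127.0.0.1:8008/jsonrpc';
-let cookie = '';
-let nextId = 0;
-
-async function call(method, params = {}) {
-  const res = await fetch(`${BASE}/${method}`, { // subpath only aids debugging, as noted above
-    method: 'POST',
-    headers: { 'Content-Type': 'application/json', ...(cookie ? { Cookie: cookie } : {}) },
-    body: JSON.stringify({ jsonrpc: '2.0', id: ++nextId, method, params }),
-  });
-  if (!cookie) cookie = (res.headers.get('set-cookie') || '').split(';')[0];
-  const data = await res.json();
-  if (data.error) throw new Error(`${data.error.type}: ${data.error.message}`);
-  return data.result;
-}
-
-// Usage: log in, then open a read transaction and use its handle.
-await call('login', { user: 'admin', passwd: 'admin' });
-const { th } = await call('new_trans', { mode: 'read', db: 'running' });
-```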
-
-### **Common Errors**
-
-The JSON-RPC specification defines the following error `code` values:
-
-* `-32700` - Invalid JSON was received by the server. An error occurred on the server while parsing the JSON text.
-* `-32600` - The JSON sent is not a valid Request object.
-* `-32601` - The method does not exist/is not available.
-* `-32602` - Invalid method parameter(s).
-* `-32603` - Internal JSON-RPC error.
-* `-32000` to `-32099` - Reserved for application-defined errors (see below).
-
-To make server errors easier to read, along with the numeric `code`, we use a `type` param that yields a literal error token. For all application-defined errors, the `code` is always `-32000`. It's best to ignore the `code` and just use the `type` param.
-
-```json
-{"jsonrpc": "2.0",
- "id": 1,
- "method": "login",
- "params":
- {"foo": "joe",
- "bar": "SWkkasE32"}}
-```
-
-Which results in:
-
-```json
-{"jsonrpc": "2.0",
- "id": 1,
- "error":
- {"code": -32602,
- "type": "rpc.method.unexpected_params",
- "message": "Unexpected params",
- "data":
- {"param": "foo"}}}
-```
-
-The `message` param is a free text string in English meant for human consumption, which is a one-to-one match with the `type` param. To remove noise from the examples, this param is omitted from the following descriptions.
-
-An additional method-specific `data` param may be added to give further details on the error, most predominantly a `reason` param which is also a free text string in English meant for human consumption. To remove noise from the examples, this param is omitted from the following descriptions. However, any additional `data` params will be noted in each method description.
-
-### **Application-defined Errors**
-
-All methods may return one of the following JSON-RPC or application-defined errors, in addition to others, specific to each method.
-
-```json
-{"type": "rpc.request.parse_error"}
-{"type": "rpc.request.invalid"}
-{"type": "rpc.method.not_found"}
-{"type": "rpc.method.invalid_params", "data": {"param": }}
-{"type": "rpc.internal_error"}
-
-
-{"type": "rpc.request.eof_parse_error"}
-{"type": "rpc.request.multipart_broken"}
-{"type": "rpc.request.too_big"}
-{"type": "rpc.request.method_denied"}
-
-
-{"type": "rpc.method.unexpected_params", "data": {"param": }}
-{"type": "rpc.method.invalid_params_type", "data": {"param": }}
-{"type": "rpc.method.missing_params", "data": {"param": }}
-{"type": "rpc.method.unknown_params_value", "data": {"param": }}
-
-
-{"type": "rpc.method.failed"}
-{"type": "rpc.method.denied"}
-{"type": "rpc.method.timeout"}
-
-{"type": "session.missing_sessionid"}
-{"type": "session.invalid_sessionid"}
-{"type": "session.overload"}
-```
-
-### FAQs
-
-
-
-What are the security characteristics of the JSON-RPC API?
-
-JSON-RPC runs on top of the embedded web server (see [Web Server](../../connected-topics/web-server.md)), which accepts HTTP and/or HTTPS.
-
-The JSON-RPC session ties the client and the server via an HTTP cookie, named `sessionid` which contains a randomly server-generated number. This cookie is not only secure (when the requests come over HTTPS), meaning that HTTPS cookies do not leak over HTTP, but even more importantly, this cookie is also HTTP-only, meaning that only the server and the browser (e.g., not the JavaScript code) have access to the cookie. Furthermore, this cookie is a session cookie, meaning that a browser restart would delete the cookie altogether.
-
-The JSON-RPC session lives as long as the user does not request to log out, as long as the user is active within a 30-minute (default value, which is configurable) time frame, and as long as there are no severe server crashes. When the session dies, the server will reply with the intention to delete any `sessionid` cookies stored in the browser (to prevent any leaks).
-
-When used in a browser, the JSON-RPC API does not accept cross-domain requests by default but can be configured to do so via the custom headers functionality in the embedded web server or by adding a reverse proxy (see [Web Server](../../connected-topics/web-server.md)).
-
-
-
-
-
-What is the proper way to use the JSON-RPC API in a CORS setup?
-
-The embedded server allows for custom headers to be set, in this case, CORS headers, like:
-
-```
-Access-Control-Allow-Origin: http://webpage.com
-Access-Control-Allow-Credentials: true
-Access-Control-Allow-Headers: Origin, Content-Type, Accept
-Access-Control-Request-Method: POST
-```
-
-A server hosted at `http://server.com` responding with these headers would mean that the JSON-RPC API can be contacted from a browser that is showing a web page from `http://webpage.com`, and will allow the browser to make POST requests, with a limited amount of headers and with credentials (i.e., cookies).
-
-This is not enough, though, because the browser also needs to be told that your JavaScript code really wants to make a CORS request. A jQuery example would look like this:
-
-```js
-// with jQuery
-$.ajax({
- type: 'post',
- url: 'http://server.com/jsonrpc',
- contentType: 'application/json',
- data: JSON.stringify({
- jsonrpc: '2.0',
- id: 1,
- method: 'login',
- params: {
- 'user': 'joe',
- 'passwd': 'SWkkasE32'
- }
- }),
- dataType: 'json',
- crossDomain: true, // CORS specific
- xhrFields: { // CORS specific
- withCredentials: true // CORS specific
- } // CORS specific
-})
-```
-
-Without this setup, you will notice that the browser will not send the `sessionid` cookie on post-login JSON-RPC calls.
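-
-For comparison, the same CORS request using plain `fetch` would be a sketch like this; `credentials: 'include'` is what makes the browser send the `sessionid` cookie:
-
-```js
-// Sketch: CORS-enabled JSON-RPC login with fetch.
-const response = await fetch('http://server.com/jsonrpc', {
-  method: 'POST',
-  headers: { 'Content-Type': 'application/json' },
-  credentials: 'include', // CORS specific: send/receive cookies cross-origin
-  body: JSON.stringify({
-    jsonrpc: '2.0',
-    id: 1,
-    method: 'login',
-    params: { user: 'joe', passwd: 'SWkkasE32' },
-  }),
-});
-```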
-
-
-
-
-
-What is a tag/keypath?
-
-A `tagpath` is a path pointing to a specific position in a YANG module's schema.
-
-A `keypath` is a path pointing to a specific position in a YANG module's instance.
-
-These kinds of paths are used for several of the API methods (e.g., `set_value`, `get_value`, `subscribe_changes`), and could be seen as XPath path specifications in abbreviated format.
-
-Let's look at some examples using the following YANG module as input:
-
-```yang
-module devices {
- namespace "http://acme.com/ns/devices";
- prefix d;
-
- container config {
- leaf description { type string; }
- list device {
- key "interface";
- leaf interface { type string; }
- leaf date { type string; }
- }
- }
-}
-```
-
-Valid tagpaths:
-
-* `/d:config/description`
-* `/d:config/device/interface`
-
-Valid keypaths:
-
-* `/d:config/device{eth0}/date` - the `date` leaf value within a device with an `interface` key set to `eth0`.
-
-Note how the prefix is prepended to the first tag in the path. This prefix is compulsory.
-
-
-
-
-
-How to restrict access to methods?
-
-The AAA infrastructure can be used to restrict access to library functions using command rules:
-
-```xml
-
- webui
- webui
- ::jsonrpc:: get_schema
- read exec
- deny
-
-```
-
-Note how the command is prefixed with `::jsonrpc::`. This tells the AAA engine to apply the command rule to JSON-RPC API functions.
-
-You can read more about the command rules in [AAA Infrastructure](../../../administration/management/aaa-infrastructure.md).
-
-
-
-
-
-What is session.overload error?
-
-A series of limits are imposed on the load that one session can put on the system. This reduces the risk that a session takes over the whole system and brings it into a DoS situation.
-
-The response will include details about the limit that triggered the error.
-
-Known limits:
-
-* Only 10,000 commands/subscriptions are allowed per session.
-
-
-
-## Methods
-
-### Commands
-
-
-
-get_cmds
-
-`get_cmds` - Get a list of the session's batch commands.
-
-**Params**
-
-```json
-{}
-```
-
-**Result**
-
-```json
-{"cmds": }
-
-cmd =
- {"params":
-
-
-
-init_cmd
-
-`init_cmd` - Starts a batch command.
-
-**Note**: The `start_cmd` method must be called to actually get the batch command to generate any messages unless the `handle` is provided as input.
-
-**Note**: As soon as the batch command prints anything on stdout, it will be sent as a message and turn up as a result to your polling call to the `comet` method.
-
-**Params**
-
-```json
-{"th": ,
- "name": ,
- "args": ,
- "emulate": ,
- "width": ,
- "height": ,
- "scroll": ,
- "comet_id": ,
- "handle": }
-```
-
-* The `name` param is one of the named commands defined in `ncs.conf`.
-* The `args` param specifies any extra arguments to be provided to the command except for the ones specified in `ncs.conf`.
-* The `emulate` param specifies if terminal emulation should be enabled.
-* The `width`, `height`, `scroll` properties define the screen properties.
-
-**Result**
-
-```json
-{"handle": }
-```
-
-A handle to the batch command is returned (equal to `handle` if provided).
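-
-A sketch of the full flow, reusing the `call` helper from the Common Concepts sketch (the command name `mycmd` and the `th` value are placeholders):
-
-```js
-// Sketch: init_cmd -> start_cmd; output then arrives via the comet method.
-const cometId = 'main';
-const { handle } = await call('init_cmd', {
-  th: 1, name: 'mycmd', args: '', emulate: false,
-  width: 80, height: 24, scroll: 0, comet_id: cometId,
-});
-await call('start_cmd', { handle });                         // command may now produce output
-const messages = await call('comet', { comet_id: cometId }); // long-poll for it
-```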
-
-
-
-
-
-send_cmd_data
-
-`send_cmd_data` - Sends data to a batch command started with `init_cmd`.
-
-**Params**
-
-```json
-{"handle": ,
- "data": }
-```
-
-The `handle` param is as returned from a call to `init_cmd` and the `data` param is what is to be sent to the batch command started with `init_cmd`.
-
-**Result**
-
-```json
-{}
-```
-
-**Errors (specific)**
-
-```json
-{"type": "cmd.not_initialized"}
-```
-
-
-
-
-
-start_cmd
-
-`start_cmd` - Signals that a batch command can start to generate output.
-
-**Note**: This method must be called to actually start the activity initiated by a call to `init_cmd`.
-
-**Params**
-
-```json
-{"handle": }
-```
-
-The `handle` param is as returned from a call to `init_cmd`.
-
-**Result**
-
-```json
-{}
-```
-
-
-
-
-
-suspend_cmd
-
-`suspend_cmd` - Suspends output from a batch command.
-
-**Note**: The `init_cmd` method must have been called with the `emulate` param set to `true` for this to work.
-
-**Params**
-
-```json
-{"handle": }
-```
-
-The `handle` param is as returned from a call to `init_cmd`.
-
-**Result**
-
-```json
-{}
-```
-
-
-
-
-
-resume_cmd
-
-`resume_cmd` - Resumes a batch command started with `init_cmd`.
-
-**Note**: the `init_cmd` method must have been called with the `emulate` param set to `true` for this to work.
-
-**Params**
-
-```json
-{"handle": }
-```
-
-The `handle` param is as returned from a call to `init_cmd`.
-
-**Result**
-
-```json
-{}
-```
-
-
-
-
-
-stop_cmd
-
-`stop_cmd` - Stops a batch command.
-
-**Note**: This method must be called to stop the activity started by a call to `init_cmd`.
-
-**Params**
-
-```json
-{"handle": }
-```
-
-The `handle` param is as returned from a call to `init_cmd`.
-
-**Result**
-
-```json
-{}
-```
-
-
-
-### Commands - Subscribe
-
-
-
-get_subscriptions
-
-`get_subscriptions` - Get a list of the session's subscriptions.
-
-**Params**
-
-```json
-{}
-```
-
-**Result**
-
-```json
-{"subscriptions": }
-
-subscription =
- {"params":
-
-
-
-subscribe_cdboper
-
-`subscribe_cdboper` - Starts a subscriber to operational data in CDB. Changes done to configuration data will not be seen here.
-
-**Note**: The `start_subscription` method must be called to actually get the subscription to generate any messages unless the `handle` is provided as input.
-
-**Note**: The `unsubscribe` method should be used to end the subscription.
-
-**Note**: As soon as a subscription message is generated, it will be sent as a message and turn up as a result to your polling call to the `comet` method.
-
-**Params**
-
-```json
-{"comet_id": ,
- "handle": ,
- "path": }
-```
-
-The `path` param is a keypath restricting the subscription messages to only be about changes done under that specific keypath.
-
-**Result**
-
-```json
-{"handle": }
-```
-
-A handle to the subscription is returned (equal to `handle` if provided).
-
-Subscription messages will end up in the `comet` method and the format of that message will be an array of changes of the same type as returned by the `subscribe_changes` method. See below.
-
-**Errors (specific)**
-
-```json
-{"type": "db.cdb_operational_not_enabled"}
-```
-
-
-
-
-
-subscribe_changes
-
-`subscribe_changes` - Starts a subscriber to configuration data in CDB. Changes done to operational data in CDB will not be seen here. Furthermore, subscription messages will only be generated when a transaction is successfully committed.
-
-**Note**: The `start_subscription` method must be called to actually get the subscription to generate any messages, unless the `handle` is provided as input.
-
-**Note**: The `unsubscribe` method should be used to end the subscription.
-
-**Note**: As soon as a subscription message is generated, it will be sent as a message and turn up as a result to your polling call to the `comet` method.
-
-**Params**
-
-```json
-{"comet_id": ,
- "handle": ,
- "path": ,
- "skip_local_changes": ,
- "hide_changes": ,
- "hide_values": }
-```
-
-The `path` param is a keypath restricting the subscription messages to only be about changes done under that specific keypath.
-
-The `skip_local_changes` param specifies if configuration changes done by the owner of the read-write transaction should generate subscription messages.
-
-The `hide_changes` and `hide_values` params specify a lower level of information in subscription messages, in case it is enough to receive just a "ping" or a list of changed keypaths, respectively, but not the new values resulting from the changes.
-
-**Result**
-
-```json
-{"handle": }
-```
-
-A handle to the subscription is returned (equal to `handle` if provided).
-
-Subscription messages will end up in the `comet` method and the format of that message will be an object such as:
-
-```json
-{"db": <"running" | "startup" | "candidate">,
- "user": ,
- "ip": ,
- "changes": }
-```
-
-The `user` and `ip` properties are the username and IP address of the committing user.
-
-The `changes` param is an array of changes of the same type as returned by the `changes` method. See above.
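-
-A sketch tying this together with the `call` helper from the Common Concepts sketch; since a semantic `handle` is passed up front, no separate `start_subscription` call is needed (the path is illustrative):
-
-```js
-// Sketch: subscribe to config changes under a path, then long-poll for them.
-const cometId = 'main';
-const handle = `${cometId}-dhcp`; // semantic handle, prefixed with the comet id
-await call('subscribe_changes', { comet_id: cometId, handle, path: '/dhcp:dhcp' });
-const notifications = await call('comet', { comet_id: cometId });
-// each entry: {handle, message: {db, user, ip, changes: [...]}}
-```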
-
-
-
-
-
-subscribe_poll_leaf
-
-`subscribe_poll_leaf` - Starts a polling subscriber to any type of operational and configuration data (outside of CDB as well).
-
-**Note**: The `start_subscription` method must be called to actually get the subscription to generate any messages unless the `handle` is provided as input.
-
-**Note**: The `unsubscribe` method should be used to end the subscription.
-
-**Note**: As soon as a subscription message is generated, it will be sent as a message and turn up as a result to your polling call to the `comet` method.
-
-**Params**
-
-```json
-{"th": ,
- "path": ,
- "interval": ,
- "comet_id": ,
- "handle": }
-```
-
-The `path` param is a keypath pointing to a leaf value.
-
-The `interval` is a timeout in seconds between when to poll the value.
-
-**Result**
-
-```json
-{"handle": }
-```
-
-A handle to the subscription is returned (equal to `handle` if provided).
-
-Subscription messages will end up in the `comet` method and the format is a simple string value.
-
-
-
-
-
-subscribe_upgrade
-
-`subscribe_upgrade` - Starts a subscriber to upgrade messages.
-
-**Note**: The `start_subscription` method must be called to actually get the subscription to generate any messages unless the `handle` is provided as input.
-
-**Note**: The `unsubscribe` method should be used to end the subscription.
-
-**Note**: As soon as a subscription message is generated, it will be sent as a message and turn up as a result to your polling call to the `comet` method.
-
-**Params**
-
-```json
-{"comet_id": ,
- "handle": }
-```
-
-**Result**
-
-```json
-{"handle": }
-```
-
-A handle to the subscription is returned (equal to `handle` if provided).
-
-Subscription messages will end up in the `comet` method and the format of that message will be an object such as:
-
-```json
-{"upgrade_state": <"wait_for_init" | "init" | "abort" | "commit">,
- "timeout": }
-```
-
-
-
-
-
-subscribe_jsonrpc_batch
-
-`subscribe_jsonrpc_batch` - Starts a subscriber to JSONRPC messages for batch requests.
-
-**Note**: The `start_subscription` method must be called to actually get the subscription to generate any messages unless the `handle` is provided as input.
-
-**Note**: The `unsubscribe` method should be used to end the subscription.
-
-**Note**: As soon as a subscription message is generated, it will be sent as a message and turn up as a result to your polling call to the `comet` method.
-
-**Params**
-
-```json
-{"comet_id": ,
- "handle": }
-```
-
-**Result**
-
-```json
-{"handle": }
-```
-
-A handle to the subscription is returned (equal to `handle` if provided).
-
-Subscription messages will end up in the `comet` method, having the exact same structure as a JSON-RPC response:
-
-```json
-{"jsonrpc":"2.0",
- "result":"admin",
- "id":1}
-
-{"jsonrpc": "2.0",
- "id": 1,
- "error":
- {"code": -32602,
- "type": "rpc.method.unexpected_params",
- "message": "Unexpected params",
- "data":
- {"param": "foo"}}}
-```
-
-
-
-
-
-subscribe_progress_trace
-
-`subscribe_progress_trace` - Starts a subscriber to progress trace events.
-
-**Note**: The `start_subscription` method must be called to actually get the subscription to generate any messages unless the `handle` is provided as input.
-
-**Note**: The `unsubscribe` method should be used to end the subscription.
-
-**Note**: As soon as a subscription message is generated, it will be sent as a message and turn up as a result to your polling call to the `comet` method.
-
-**Params**
-
-```json
-{"comet_id": ,
- "handle": ,
- "verbosity": <"normal" | "verbose" | "very_verbose" | "debug", default: "normal">
- "filter_context": <"webui" | "cli" | "netconf" | "rest" | "snmp" | "system" | string, optional>}
-```
-
-The `verbosity` param specifies the verbosity of the progress trace.
-
-The `filter_context` param can be used to only get progress events from a specific context. For example, if `filter_context` is set to `cli`, only progress trace events from the CLI are returned.
-
-**Result**
-
-```json
-{"handle": }
-```
-
-A handle to the subscription is returned (equal to `handle` if provided).
-
-Subscription messages will end up in the `comet` method and the format of that message will be an object such as:
-
-```json
-{"timestamp": ,
- "duration": ,
- "span-id": ,
- "parent-span-id": ,
- "trace-id": ,
- "session-id": ,
- "transaction-id": ,
- "datastore": ,
- "context": ,
- "subsystem": ,
- "message": ,
- "annotation": ,
- "attributes":
-
-
-
-start_subscription
-
-`start_subscription` - Signals that a subscribe command can start to generate output.
-
-**Note**: This method must be called to actually start the activity initiated by a call to one of the methods `subscribe_cdboper`, `subscribe_changes`, `subscribe_messages`, `subscribe_poll_leaf`, or `subscribe_upgrade` with no `handle`.
-
-**Params**
-
-```json
-{"handle": }
-```
-
-The `handle` param is as returned from a call to `subscribe_cdboper`, `subscribe_changes`, `subscribe_messages`, `subscribe_poll_leaf` or `subscribe_upgrade`.
-
-**Result**
-
-```json
-{}
-```
-
-
-
-
-
-unsubscribe
-
-`unsubscribe` - Stops a subscriber.
-
-**Note**: This method must be called to stop the activity started by a call to one of the methods `subscribe_cdboper`, `subscribe_changes`, `subscribe_messages`, `subscribe_poll_leaf`, or `subscribe_upgrade`.
-
-**Params**
-
-```json
-{"handle": }
-```
-
-The `handle` param is as returned from a call to `subscribe_cdboper`, `subscribe_changes`, `subscribe_messages`, `subscribe_poll_leaf` or `subscribe_upgrade`.
-
-**Result**
-
-```json
-{}
-```
-
-
-
-### Data
-
-
-
-create
-
-`create` - Create a list entry, a presence container, or a leaf of type empty (unless in a union, then use `set_value`).
-
-**Params**
-
-```json
-{"th": ,
- "path": }
-```
-
-The `path` param is a keypath pointing to data to be created.
-
-**Result**
-
-```json
-{}
-```
-
-**Errors (specific)**
-
-```json
-{"type": "db.locked"}
-```
-
-
-
-
-
-delete
-
-`delete` - Deletes an existing list entry, a presence container, or an optional leaf and all its children (if any).
-
-**Note**: If the permission to delete is denied on a child, the 'warnings' array in the result will contain a warning 'Some elements could not be removed due to NACM rules prohibiting access.'. The `delete` method will still delete as much as is allowed by the rules. See [AAA Infrastructure](../../../administration/management/aaa-infrastructure.md) for more information about permissions and authorization.
-
-**Params**
-
-```json
-{"th": ,
- "path": }
-```
-
-The `path` param is a keypath pointing to data to be deleted.
-
-**Result**
-
-```json
-{} |
- {"warnings": }
-```
-
-**Errors (specific)**
-
-```json
-{"type": "db.locked"}
-```
-
-
-
-
-
-exists
-
-`exists` - Checks if optional data exists.
-
-**Params**
-
-```json
-{"th": ,
- "path": }
-```
-
-The `path` param is a keypath pointing to data to be checked for existence.
-
-**Result**
-
-```json
-{"exists": }
-```
-
-
-
-
-
-get_case
-
-`get_case` - Get the case of a choice leaf.
-
-**Params**
-
-```json
-{"th": ,
- "path": ,
- "choice": }
-```
-
-The `path` param is a keypath pointing to data that contains the choice leaf given by the `choice` param.
-
-**Result**
-
-```json
-{"case": }
-```
-
-
-
-
-
-show_config
-
-`show_config` - Retrieves configuration and operational data from the provided transaction. Output can be returned in several formats (CLI, CLI-C, XML, or JSON variants), with optional pagination and filtering to control the breadth and volume of returned data.
-
-**Params**
-
-```json
-{"th": }
-```
-
-```json
-{"path": }
-```
-
-```json
-{"result_as": <"json" | "json2" | "cli" | "cli-c" | "xml", default: "cli">}
-```
-
-```json
-{"with_oper": }
-```
-
-```json
-{"max_size": }
-```
-
-```json
-{"depth": }
-```
-
-```json
-{"include": }
-```
-
-```json
-{"exclude": }
-```
-
-```json
-{"offset": }
-```
-
-```json
-{"limit": }
-```
-
-The `path` param is a keypath to the configuration to be returned. `result_as` controls the output format; `cli` for CLI curly bracket format, `cli-c` for Cisco CLI style format, `xml` for XML compatible with NETCONF, `json` for JSON compatible with RESTCONF, and `json2` for a variant of the RESTCONF JSON format. `max_size` sets the maximum size of the data field in kB; set to 0 to disable the limit. The `with_oper` param, which controls if the operational data should be included, only takes effect when `result_as` is set to `json` or `json2`. `depth` limits the depth (levels) of the returned subtree below the target `path`. `include` retrieves a subset of nodes below the target `path`, similar to the [RESTCONF fields query parameter](../../core-concepts/northbound-apis/restconf-api.md#d5e1600). `exclude` excludes a subset of nodes below the target `path`, similar to the [RESTCONF exclude query parameter](../../core-concepts/northbound-apis/restconf-api.md#the-exclude-query-parameter). `offset` controls the number of list elements to skip before returning the requested set of entries. `limit` controls the number of list entries to retrieve.
-
-**Result**
-
-The `result_as` param when set to `cli`, `cli-c`, or `xml` :
-
-```json
-{"config": }
-```
-
-The `result_as` param when set to `json` or `json2`:
-
-```json
-{"data": }
-```
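-
-For example, to fetch a subtree as RESTCONF-style JSON, including operational data (a sketch reusing the `call` helper; the path and `th` are illustrative):
-
-```js
-// Sketch: dump configuration and operational data under a path as JSON.
-const { data } = await call('show_config', {
-  th: 1,
-  path: '/dhcp:dhcp',
-  result_as: 'json',
-  with_oper: true,
-  max_size: 0, // disable the size limit
-});
-```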
-
-
-
-
-
-load
-
-`load` - Load XML configuration into the current transaction.
-
-**Params**
-
-```json
-{"th": ,
- "data":
- "path":
- "format": <"json" | "xml", default: "xml">
- "mode": <"create" | "merge" | "replace", default: "merge">}
-```
-
-The `data` param is the data to be loaded into the transaction. `mode` controls how the data is loaded into the transaction, analogous to the CLI command `load`. `format` tells `load` which format `data` is in. If `format` is `xml`, the data must be an XML document encoded as a string. If `format` is `json`, data can either be a JSON document encoded as a string or the JSON data itself.
-
-**Result**
-
-```json
-{}
-```
-
-**Errors (specific)**
-
-```json
-{"row": , "message": }
-```
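-
-A sketch of merging an XML snippet into the current transaction, reusing the `call` helper (the payload and its namespace are illustrative):
-
-```js
-// Sketch: load an XML document, encoded as a string, with mode "merge".
-await call('load', {
-  th: 1,
-  data: '<dhcp xmlns="http://yang-central.org/ns/example/dhcp">' +
-        '<max-lease-time>7200</max-lease-time></dhcp>',
-  format: 'xml',
-  mode: 'merge',
-});
-```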
-
-
-
-### Data - Attributes
-
-
-
-get_attrs
-
-`get_attrs` - Get node attributes.
-
-**Params**
-
-```json
-{"th": ,
- "path": ,
- "names": }
-```
-
-The `path` param is a keypath pointing to the node and the `names` param is a list of attribute names that you want to retrieve.
-
-**Result**
-
-```json
-{"attrs":
-
-
-
-set_attrs
-
-`set_attrs` - Set node attributes.
-
-**Params**
-
-```json
-{"th": ,
- "path": ,
- "attrs":
-
-### Data - Leaves
-
-
-
-get_value
-
-`get_value` - Gets a leaf value.
-
-**Params**
-
-```json
-{"th": ,
- "path": ,
- "check_default": }
-```
-
-The `path` param is a keypath pointing to a value.
-
-The `check_default` param adds `is_default` to the result if set to `true`. `is_default` is set to `true` if the default value handling returned the value.
-
-**Result**
-
-```json
-{"value": }
-```
-
-**Example**
-
-{% code title="Example: Method get_value" %}
-```bash
-curl \
- --cookie 'sessionid=sess12541119146799620192;' \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "get_value",
- "params": {"th": 4711,
- "path": "/dhcp:dhcp/max-lease-time"}}' \
- http://127.0.0.1:8008/jsonrpc
-
-{
- "jsonrpc": "2.0",
- "id": 1,
- "result": {"value": "7200"}
-}
-```
-{% endcode %}
-
-
-
-
-
-get_values
-
-`get_values` - Get leaf values.
-
-**Params**
-
-```json
-{"th": ,
- "path": ,
- "check_default": ,
- "leafs": }
-```
-
-The `path` param is a keypath pointing to a container. The `leafs` param is an array of children names residing under the parent container in the YANG module.
-
-The `check_default` param adds `is_default` to the result if set to `true`. `is_default` is set to `true` if the default value handling returned the value.
-
-**Result**
-
-```json
-{"values": }
-
-value = {"value": , "access": }
-error = {"error": , "access": } |
- {"exists": true, "access": } |
- {"not_found": true, "access": }
-access = {"read": true, write: true}
-```
-
-**Note**: The access object has no `read` and/or `write` properties if there are no read and/or write access rights.
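-
-A sketch reading several leaves under one container in a single round trip (reusing the `call` helper; the path and leaf names are illustrative):
-
-```js
-// Sketch: fetch multiple leaf values under a parent container at once.
-const { values } = await call('get_values', {
-  th: 1,
-  path: '/dhcp:dhcp',
-  check_default: true,
-  leafs: ['default-lease-time', 'max-lease-time'],
-});
-// values[i] is {value, access} or one of the error/exists/not_found variants above
-```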
-
-
-
-
-
-set_value
-
-`set_value` - Sets a leaf value.
-
-**Params**
-
-```json
-{"th": ,
- "path": ,
- "value": ,
- "dryrun": }
-```
-
-**Errors (specific)**
-
-```json
-{"type": "data.already_exists"}
-{"type": "data.not_found"}
-{"type": "data.not_writable"}
-{"type": "db.locked"}
-```
-
-**Example**
-
-{% code title="Example: Method set_value" %}
-```bash
-curl \
- --cookie 'sessionid=sess12541119146799620192;' \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "set_value",
- "params": {"th": 4711,
- "path": "/dhcp:dhcp/max-lease-time",
- "value": "4500"}}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "result": {}
-}
-```
-{% endcode %}
-
-
-
-### Data - Leafref
-
-
-
-deref
-
-`deref` - Dereferences a leaf with a leafref type.
-
-**Params**
-
-```json
-{"th": ,
- "path": ,
- "result_as": <"paths" | "target" | "list-target", default: "paths">}
-```
-
-The `path` param is a keypath pointing to a leaf with a leafref type.
-
-**Result**
-
-```json
-{"paths": }
-```
-
-```json
-{"target": }
-```
-
-```json
-{"list-target": }
-```
-
-
-
-
-
-get_leafref_values
-
-`get_leafref_values` - Gets all possible values for a leaf with a leafref type.
-
-**Params**
-
-```json
-{"th": ,
- "path": ,
- "offset": ,
- "limit": ,
- "starts_with": ,
- "skip_grouping": ,
- "keys":
-
-### Data - Lists
-
-
-
-rename_list_entry
-
-`rename_list_entry` - Renames a list entry.
-
-**Params**
-
-```json
-{"th": ,
- "from_path": ,
- "to_keys": }
-```
-
-The `from_path` is a keypath pointing out the list entry to be renamed.
-
-The list entry to be renamed will, under the hood, be deleted altogether and then recreated with the content from the deleted list entry copied in.
-
-The `to_keys` param is an array with the new key values. The array must contain a full set of key values.
-
-**Result**
-
-```json
-{}
-```
-
-**Errors (specific)**
-
-```json
-{"type": "data.already_exists"}
-{"type": "data.not_found"}
-{"type": "data.not_writable"}
-```
-
-
-
-
-
-copy_list_entry
-
-`copy_list_entry` - Copies a list entry.
-
-**Params**
-
-```json
-{"th": ,
- "from_path": ,
- "to_keys": }
-```
-
-The `from_path` is a keypath pointing out the list entry to be copied.
-
-The `to_keys` param is an array with the new key values. The array must contain a full set of key values.
-
-Copying between different ned-id versions works as long as the schema nodes being copied have not changed between the versions.
-
-**Result**
-
-```json
-{}
-```
-
-**Errors (specific)**
-
-```json
-{"type": "data.already_exists"}
-{"type": "data.not_found"}
-{"type": "data.not_writable"}
-```
-
-
-
-
-
-move_list_entry
-
-`move_list_entry` - Moves an ordered-by user list entry relative to its siblings.
-
-**Params**
-
-```json
-{"th": ,
- "from_path": ,
- "to_path": ,
- "mode": <"first" | "last" | "before" | "after">}
-```
-
-The `from_path` is a keypath pointing out the list entry to be moved.
-
-The list entry to be moved can either be moved to the first or the last position, i.e., if the `mode` param is set to `first` or `last`, the `to_path` keypath param has no meaning.
-
-If the `mode` param is set to `before` or `after`, the `to_path` param must be specified, i.e., the list entry will be moved to the position before or after the list entry which the `to_path` keypath param points to.
-
-**Result**
-
-```json
-{}
-```
-
-**Errors (specific)**
-
-```json
-{"type": "db.locked"}
-```
-
-
-
-
-
-append_list_entry
-
-`append_list_entry` - Append a list entry to a leaf-list.
-
-**Params**
-
-```json
-{"th": ,
- "path": ,
- "value": }
-```
-
-The `path` is a keypath pointing to a leaf-list.
-
-**Result**
-
-```json
-{}
-```
-
-
-
-
-
-count_list_keys
-
-`count_list_keys` - Counts the number of keys in a list.
-
-**Params**
-
-```json
-{"th":
- "path": }
-```
-
-The `path` parameter is a keypath pointing to a list.
-
-**Result**
-
-```json
-{"count": }
-```
-
-
-
-
-
-get_list_keys
-
-`get_list_keys` - Enumerates keys in a list.
-
-**Params**
-
-```json
-{"th": ,
- "path": ,
- "chunk_size": ,
- "start_with": ,
- "lh": ,
- "empty_list_key_as_null": }
-```
-
-The `th` parameter is the transaction handle.
-
-The `path` parameter is a keypath pointing to a list. Required on the first invocation; optional in the following ones.
-
-The `chunk_size` parameter is the number of requested keys in the result. Optional - default is unlimited.
-
-The `start_with` parameter will be used to filter out all keys that do not start with the provided strings. The parameter supports multiple keys, e.g., if the list has two keys, then `start_with` can hold two items.
-
-The `lh` (list handle) parameter is optional (on the first invocation) but must be used in the following invocations.
-
-The `empty_list_key_as_null` parameter controls whether list keys of type empty are represented as the name of the list key (default) or as `[null]`.
-
-**Result**
-
-```json
-{"keys": ,
- "total_count": ,
- "lh": }
-```
-
-Each invocation of `get_list_keys` will return at most `chunk_size` keys. The returned `lh` must be used in the following invocations to retrieve the next chunk of keys. When no more keys are available, the returned `lh` will be set to `-1`.
-
-On the first invocation, `lh` can either be omitted or set to `-1`.
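-
-A sketch of paging through all keys of a list with the `lh` handle, reusing the `call` helper (the path and `th` are illustrative):
-
-```js
-// Sketch: retrieve list keys in chunks until the server returns lh === -1.
-let lh = -1;
-const keys = [];
-do {
-  const res = await call('get_list_keys', {
-    th: 1, path: '/dhcp:dhcp/subnet', chunk_size: 100, lh,
-  });
-  keys.push(...res.keys);
-  lh = res.lh;
-} while (lh !== -1);
-```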
-
-
-
-### Data - Query
-
-
-
-query
-
-`query` - Starts a new query attached to a transaction handle, retrieves the results, and stops the query immediately. This is a convenience method for calling `start_query`, `run_query` and `stop_query` in a one-time sequence.
-
-This method should not be used for paginated results, as it results in performance degradation - use `start_query`, multiple `run_query` and `stop_query` instead.
-
-**Example**
-
-{% code title="Example: Method query" %}
-```bash
-curl \
- --cookie "sessionid=sess11635875109111642;" \
- -X POST \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "query",
- "params": {"th": 1,
- "xpath_expr": "/dhcp:dhcp/dhcp:foo",
- "result_as": "keypath-value"}}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "result":
- {"current_position": 2,
- "total_number_of_results": 4,
- "number_of_results": 2,
- "number_of_elements_per_result": 2,
- "results": ["foo", "bar"]}}
-```
-{% endcode %}
-
-
-
-
-
-start_query
-
-`start_query` - Starts a new query attached to a transaction handle. On success, a query handle is returned to be used in subsequent calls to `run_query`.
-
-**Params**
-
-```json
-{"th": ,
- "xpath_expr": ,
- "path": ,
- "selection":
- "chunk_size":
- "initial_offset": ,
- "sort", ,
- "sort_order": <"ascending" | "descending", optional>,
- "include_total": ,
- "context_node": ,
- "result_as": <"string" | "keypath-value" | "leaf_value_as_string", default: "string">}
-```
-
-The `xpath_expr` param is the primary XPath expression to base the query on. Alternatively, one can give a keypath as the `path` param, and internally the keypath will be translated into an XPath expression.
-
-A query is a way of evaluating an XPath expression and returning the results in chunks. The primary XPath expression must evaluate to a node-set, i.e., the result. For each node in the result, a `selection` XPath expression is evaluated with the result node as its context node.
-
-**Note**: The terminology used here is as defined in http://en.wikipedia.org/wiki/XPath.
-
-For example, given this YANG snippet:
-
-```yang
-list interface {
- key name;
- unique number;
- leaf name {
- type string;
- }
- leaf number {
- type uint32;
- mandatory true;
- }
- leaf enabled {
- type boolean;
- default true;
- }
-}
-```
-
-The `xpath_expr` could be `/interface[enabled='true']` and `selection` could be `{ "name", "number" }`.
-
-Note that the `selection` expressions must be valid XPath expressions, e.g., to figure out the name of an interface and whether its number is even or not, the expressions must look like: `{ "name", "(number mod 2) == 0" }`.
-
-The results are then fetched using `run_query`, which returns them in the format specified by the `result_as` param.
-
-There are two different types of results:
-
-* `string` - the result is an array of the strings obtained by evaluating the `selection` XPath expressions.
-* `keypath-value` - the result is an array of the keypaths or values of the nodes that the `selection` XPath expressions evaluate to.
-
-This means that care must be taken so that the combination of `selection` expressions and return types actually yields sensible results (for example, `1 + 2` is a valid `selection` XPath expression and would result in the string `3` when setting the result type to `string` - but it is not a node, and thus has no keypath or value).
-
-It is possible to sort the result using the built-in XPath function `sort-by()`, but it is also possible to sort the result using expressions specified by the `sort` param. These expressions will be used to construct a temporary index which will live as long as the query is active. For example, to start a query sorting first on the `enabled` leaf, and then on `number`, one would call:
-
-```js
-$.post("/jsonrpc", {
- jsonrpc: "2.0",
- id: 1,
- method: "start_query",
- params: {
- th: 1,
- xpath_expr: "/interface[enabled='true']",
- selection: ["name", "number", "enabled"],
- sort: ["enabled", "number"]
- }
-})
- .done(...);
-```
-
-The `context_node` param is a keypath pointing out the node to apply the query on; it is only taken into account when the `xpath_expr` uses relative paths. Without a `context_node`, relative paths are treated as absolute paths.
-
-The `chunk_size` param specifies how many result entries to return at a time. If set to `0`, a default number will be used.
-
-The `initial_offset` param is the result entry to begin with (`1` means to start from the beginning).
-
-**Result**
-
-```json
-{"qh": }
-```
-
-A new query handle to be used when calling `run_query`, etc.
-
-**Example**
-
-{% code title="Example: Method start_query" %}
-```bash
-curl \
- --cookie "sessionid=sess11635875109111642;" \
- -X POST \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "start_query",
- "params": {"th": 1,
- "xpath_expr": "/dhcp:dhcp/dhcp:foo",
- "result_as": "keypath-value"}}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "result": 47}
-```
-{% endcode %}
-
-
-
-
-
-run_query
-
-`run_query` - Retrieves the result to a query (as chunks). For more details on queries, read the description of [`start_query`](json-rpc-api.md#start_query).
-
-**Params**
-
-```json
-{"qh": }
-```
-
-The `qh` param is as returned from a call to `start_query`.
-
-**Result**
-
-```json
-{"position": ,
- "total_number_of_results": ,
- "number_of_results": ,
- "chunk_size": ,
- "result_as": <"string" | "keypath-value" | "leaf_value_as_string">,
- "results": }
-
-result = |
- {"keypath": , "value": }
-```
-
-The `position` param is the number of the first result entry in this chunk, i.e. for the first chunk it will be 1.
-
-How many result entries there are in this chunk is indicated by the `number_of_results` param. It will be 0 for the last chunk.
-
-The `chunk_size` and the `result_as` properties are as given in the call to `start_query`.
-
-The `total_number_of_results` param is the total number of result entries retrieved so far.
-
-The `result` param is as described in the description of `start_query`.
-
-**Example**
-
-{% code title="Example: Method run_query" %}
-```bash
-curl \
- --cookie "sessionid=sess11635875109111642;" \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "run_query",
- "params": {"qh": 22}}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "result":
- {"current_position": 2,
- "total_number_of_results": 4,
- "number_of_results": 2,
- "number_of_elements_per_result": 2,
- "results": ["foo", "bar"]}}
-```
-{% endcode %}
-
-
-
-
-
-reset_query
-
-`reset_query` - Reset/rewind a running query so that it starts from the beginning again. The next call to `run_query` will then return the first chunk of result entries.
-
-**Params**
-
-```json
-{"qh": }
-```
-
-The `qh` param is as returned from a call to `start_query`.
-
-**Result**
-
-```json
-{}
-```
-
-**Example**
-
-{% code title="Example: Method reset_query" %}
-```bash
-curl \
- --cookie 'sessionid=sess12541119146799620192;' \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "reset_query",
- "params": {"qh": 67}}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "result": true}
-```
-{% endcode %}
-
-
-
-
-
-stop_query
-
-`stop_query` - Stops the running query identified by the query handle. If a query is not explicitly closed using this call, it will be cleaned up when the transaction the query is linked to ends.
-
-**Params**
-
-```json
-{"qh": }
-```
-
-The `qh` param is as returned from a call to `start_query`.
-
-**Result**
-
-```json
-{}
-```
-
-**Example**
-
-{% code title="Example: Method stop_query" %}
-```bash
-curl \
- --cookie 'sessionid=sess12541119146799620192;' \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "stop_query",
- "params": {"qh": 67}}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "result": true}
-```
-{% endcode %}
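-
-Combined, a paginated query loop looks like this sketch (reusing the `call` helper; the XPath expression and `th` are illustrative):
-
-```js
-// Sketch: start_query -> run_query until exhausted -> stop_query.
-const { qh } = await call('start_query', {
-  th: 1,
-  xpath_expr: '/dhcp:dhcp/dhcp:foo',
-  chunk_size: 100,
-  result_as: 'string',
-});
-let chunk;
-do {
-  chunk = await call('run_query', { qh });
-  chunk.results.forEach((r) => console.log(r));
-} while (chunk.number_of_results > 0); // 0 signals the last chunk
-await call('stop_query', { qh });
-```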
-
-
-
-### Database
-
-
-
-reset_candidate_db
-
-`reset_candidate_db` - Resets the candidate datastore.
-
-**Result**
-
-```json
-{}
-```
-
-
-
-
-
-lock_db
-
-`lock_db` - Takes a database lock.
-
-**Params**
-
-```json
-{"db": <"startup" | "running" | "candidate">}
-```
-
-The `db` param specifies which datastore to lock.
-
-**Result**
-
-```json
-{}
-```
-
-**Errors (specific)**
-
-```json
-{"type": "db.locked", "data": {"sessions": }}
-```
-
-The `data.sessions` param is an array of strings describing the current sessions of the locking user, e.g., an array of "admin tcp (cli from 192.245.2.3) on since 2006-12-20 14:50:30 exclusive".
-
-
-
-
-
-unlock_db
-
-`unlock_db` - Releases a database lock.
-
-**Params**
-
-```json
-{"db": <"startup" | "running" | "candidate">}
-```
-
-The `db` param specifies which datastore to unlock.
-
-**Result**
-
-```json
-{}
-```
-
-
-
-
-
-copy_running_to_startup_db
-
-`copy_running_to_startup_db` - Copies the running datastore to the startup datastore.
-
-**Result**
-
-```json
-{}
-```
-
-
-
-### General
-
-
-
-comet
-
-`comet` - Listens on a comet channel, i.e., all asynchronous messages from batch commands and subscriptions started by calls to `start_cmd`, `subscribe_cdboper`, `subscribe_changes`, `subscribe_messages`, `subscribe_poll_leaf`, or `subscribe_upgrade` end up on the comet channel.
-
-You are expected to have a continuous long-polling call to the `comet` method at any given time. As soon as the browser or server closes the socket, due to browser or server connect timeout, the `comet` method should be called again.
-
-As soon as the `comet` method returns with values, they should be dispatched, and the `comet` method should be called again.
-
-**Params**
-
-```json
-{"comet_id": }
-```
-
-**Result**
-
-```
-[{"handle": ,
- "message": },
- ...]
-```
-
-**Errors (specific)**
-
-```json
-{"type": "comet.duplicated_channel"}
-```
-
-**Example**
-
-{% code title="Example: Method comet" %}
-```bash
-curl \
- --cookie 'sessionid=sess12541119146799620192;' \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "subscribe_changes",
- "params": {"comet_id": "main",
- "path": "/dhcp:dhcp"}}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "result": {"handle": "2"}}
-
-curl \
- --cookie 'sessionid=sess12541119146799620192;' \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "start_cmd",
- "params": {"handle": "2"}}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "result": {}}
-
-curl \
- -m 15 \
- --cookie 'sessionid=sess12541119146799620192;' \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "comet",
- "params": {"comet_id": "main"}}' \
- http://127.0.0.1:8008/jsonrpc
-```
-{% endcode %}
-
-Hangs, and finally:
-
-```json
-{"jsonrpc": "2.0",
- "id": 1,
- "result":
- [{"handle": "1",
- "message":
- {"db": "running",
- "changes":
- [{"keypath": "/dhcp:dhcp/default-lease-time",
- "op": "value_set",
- "value": "100"}],
- "user": "admin",
- "ip": "127.0.0.1"}}]}
-```
-
-In this case, the admin user seems to have set `/dhcp:dhcp/default-lease-time` to 100.
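-
-The continuous long-poll loop described at the top of this method can be sketched as follows (reusing the `call` helper):
-
-```js
-// Sketch: perpetual comet long-poll; re-issue the call as soon as it returns.
-async function pollComet(cometId, dispatch) {
-  for (;;) {
-    // Real code should also catch connect timeouts and simply poll again.
-    const messages = await call('comet', { comet_id: cometId });
-    messages.forEach(({ handle, message }) => dispatch(handle, message));
-  }
-}
-```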
-
-
-
-
-
-get_system_setting
-
-`get_system_setting` - Extracts system settings such as capabilities, supported datastores, etc.
-
-**Params**
-
-```json
-{"operation": <"capabilities" | "customizations" | "models" | "user" | "version" | "all" | "namespaces", default: "all">}
-```
-
-The `operation` param specifies which system setting to get:
-
-* `capabilities` - the server-side settings are returned, e.g., whether rollback and confirmed commit are supported.
-* `customizations` - an array of all WebUI customizations.
-* `models` - an array of all loaded YANG modules is returned, i.e., prefix, namespace, name.
-* `user` - the username of the currently logged-in user is returned.
-* `version` - the system version.
-* `all` - all of the above is returned.
-* (DEPRECATED) `namespaces` - an object of all loaded YANG modules is returned, mapping prefix to namespace.
-
-**Result**
-
-```json
-{"user:" ,
- "models:" ,
- "version:" ,
- "customizations": ,
- "capabilities":
- {"rollback": ,
- "copy_running_to_startup": ,
- "exclusive": ,
- "confirmed_commit":
- },
- "namespaces":
-
-
-
-abort
-
-`abort` - Abort a JSON-RPC method by its associated ID.
-
-**Params**
-
-```json
-{"id": }
-```
-
-The `id` param is the id of the JSON-RPC method to be aborted.
-
-**Result**
-
-```json
-{}
-```
-
-
-
-
-
-eval_XPath
-
-`eval_XPath` - Evaluates an XPath expression on the server side.
-
-**Params**
-
-```json
-{"th": ,
- "xpath_expr": }
-```
-
-The `xpath_expr` param is the XPath expression to be evaluated.
-
-**Result**
-
-```json
-{"value": }
-```
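-
-For example, counting list entries server-side (a sketch reusing the `call` helper; the expression and `th` are illustrative):
-
-```js
-// Sketch: evaluate an XPath expression in the given transaction.
-const { value } = await call('eval_XPath', {
-  th: 1,
-  xpath_expr: 'count(/dhcp:dhcp/subnet)',
-});
-```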
-
-
-
-### Messages
-
-
-
-send_message
-
-`send_message` - Sends a message to another user in the CLI or Web UI.
-
-**Params**
-
-```json
-{"to": ,
- "message": }
-```
-
-The `to` param is the user name of the user to send the message to and the `message` param is the actual message.
-
-**Note**: The username `all` will broadcast the message to all users.
-
-**Result**
-
-```json
-{}
-```
-
-
-
-
-
-subscribe_messages
-
-`subscribe_messages` - Starts a subscriber to messages.
-
-**Note**: The `start_subscription` method must be called to actually get the subscription to generate any messages unless the `handle` is provided as input.
-
-**Note**: The `unsubscribe` method should be used to end the subscription.
-
-**Note**: As soon as a subscription message is generated, it will be sent as a message and turn up as a result to your polling call to the `comet` method.
-
-**Params**
-
-```json
-{"comet_id": ,
- "handle": }
-```
-
-**Result**
-
-```json
-{"handle": }
-```
-
-A handle to the subscription is returned (equal to `handle` if provided).
-
-Subscription messages will end up in the `comet` method and the format of these messages depend on what has happened.
-
-When a new user has logged in:
-
-```json
-{"new_user":
- "me":
- "user": ,
- "proto": <"ssh" | "tcp" | "console" | "http" | "https" | "system">,
- "ctx": <"cli" | "webui" | "netconf">
- "ip": ,
- "login": }
-```
-
-When a user logs out:
-
-```json
-{"del_user": ,
- "user": }
-```
-
-When receiving a message:
-
-```json
-{"sender": ,
- "message": }
-```
-
-
-
-### Schema
-
-
-
-get_description
-
-`get_description` - Gets the description of a node. To be able to get the description in the response, the `fxs` file needs to be compiled with the flag `--include-doc`. This operation can be heavy, so instead of calling `get_description` directly, first confirm that a description is available by checking the `CS_HAS_DESCR` flag in the `get_schema` response.
-
-**Params**
-
-```json
-{"th": ,
- "path": }
-```
-
-A `path` is a tagpath/keypath pointing into a specific sub-tree of a YANG module.
-
-**Result**
-
-```json
-{"description": }
-```
-
-
-
-
-
-get_deps
-
-`get_deps` - Retrieve all dependency instances for a specific node instance. There are four sources of dependencies: `must`, `when`, `tailf:display-when` statements, and the `path` statement of a leafref. Each dependency type will be returned separately in its corresponding field: `must`, `when`, `display_when`, and `ref_node`.
-
-**Params**
-
-```json
-{"th": ,
- "path": }
-```
-
-The `path` param is a keypath pointing to an existing node.
-
-**Result**
-
-```json
-{"must": ,
- "when": ,
- "display_when": ,
- "ref_node": }
-```
-
-
-
-
-
-get_schema
-
-`get_schema` - Exports a JSON schema for a selected part (or all) of a specific YANG module (with optional instance data inserted).
-
-**Params**
-
-```json
-{"th": ,
- "namespace": ,
- "path": ,
- "levels": ,
- "insert_values": ,
- "evaluate_when_entries": ,
- "stop_on_list": ,
- "cdm_namespace": }
-```
-
-One of the properties `namespace` or `path` must be specified.
-
-A `namespace` is as specified in a YANG module.
-
-A `path` is a tagpath/keypath pointing into a specific sub-tree of a YANG module.
-
-The `levels` param limits the maximum depth of containers and lists from which a JSON schema should be produced (-1 means unlimited depth).
-
-The `insert_values` param signals that instance data for leafs should be inserted into the schema. This way, the need for explicit forthcoming calls to `get_elem` is avoided.
-
-The `evaluate_when_entries` param signals that schema entries should be included in the schema even though their `when` or `tailf:display-when` statements evaluate to false, i.e. instead a boolean `evaluated_when_entry` param is added to these schema entries.
-
-The `stop_on_list` param limits the schema generation to one level under the list when true.
-
-The `cdm_namespace` param signals the inclusion of `cdm-namespace` entries where appropriate.
-
-**Result**
-
-```json
-{"meta":
- {"namespace": ,
- "keypath": ,
- "prefix": ,
- "types": },
- "data": }
-
-type = {<type name>: <type_stack>}
-
-type_stack =
-
-type_stack_entry =
- {"bits": , "size": <32 | 64>} |
- {"leaf_type": , "list_type": } |
- {"union": } |
- {"name": ,
- "info": ,
- "readonly": ,
- "facets": }
-
-primitive_type =
- "empty" |
- "binary" |
- "bits" |
- "date-and-time" |
- "instance-identifier" |
- "int64" |
- "int32" |
- "int16" |
- "uint64" |
- "uint32" |
- "uint16" |
- "uint8" |
- "ip-prefix" |
- "ipv4-prefix" |
- "ipv6-prefix" |
- "ip-address-and-prefix-length" |
- "ipv4-address-and-prefix-length" |
- "ipv6-address-and-prefix-length" |
- "hex-string" |
- "dotted-quad" |
- "ip-address" |
- "ipv4-address" |
- "ipv6-address" |
- "gauge32" |
- "counter32" |
- "counter64" |
- "object-identifier"
-
-facet_entry =
- {"enumeration": {"label": , "info": }} |
- {"fraction-digits": {"value": }} |
- {"length": {"value": }} |
- {"max-length": {"value": }} |
- {"min-length": {"value": }} |
- {"leaf-list": } |
- {"max-inclusive": {"value": }} |
- {"max-length": {"value": }} |
- {"range": {"value": }} |
- {"min-exclusive": {"value": }} |
- {"min-inclusive": {"value": }} |
- {"min-length": {"value": }} |
- {"pattern": {"value": }} |
- {"total-digits": {"value": }}
-
-range_entry =
- "min" |
- "max" |
- |
- [, ]
-
-child =
- {"kind": ,
- "name": ,
- "qname": ,
- "info": ,
- "namespace": ,
- "xml-namespace": ,
- "is_action_input": ,
- "is_action_output": ,
- "is_cli_preformatted": ,
- "is_mount_point":
- "presence": ,
- "ordered_by": ,
- "is_config_false_callpoint": ,
- "key": ,
- "exists": ,
- "value": ,
- "is_leafref": ,
- "leafref_target": ,
- "when_targets": ,
- "deps":
- "hidden": ,
- "default_ref":
- {"namespace": ,
- "tagpath":
- },
- "access":
- {"create": ,
- "update": ,
- "delete": ,
- "execute":
- },
- "config": ,
- "readonly": ,
- "suppress_echo": ,
- "type":
- {"name": ,
- "primitive":
- },
- "generated_name": ,
- "units": ,
- "leafref_groups": ,
- "active": ,
- "cases": ,
- "default": ,
- "mandatory": ,
- "children":
- }
-
-kind =
- "module" |
- "access-denies" |
- "list-entry" |
- "choice" |
- "key" |
- "leaf-list" |
- "action" |
- "container" |
- "leaf" |
- "list" |
- "notification"
-
-case_entry =
- {"kind": "case",
- "name": ,
- "children":
- }
-```
-
-This is a fairly complex piece of JSON but it essentially maps what is seen in a YANG module. Keep that in mind when scrutinizing the above.
-
-The `meta` param contains meta-information about the YANG module such as namespace and prefix but it also contains type stack information for each type used in the YANG module represented in the `data` param. Together with the `meta` param, the `data` param constitutes a complete YANG module in JSON format.
-
-**Example**
-
-{% code title="Example: Method get_schema" %}
-```bash
-curl \
- --cookie "sessionid=sess11635875109111642;" \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "get_schema",
- "params": {"th": 2,
- "path": "/aaa:aaa/authentication/users/user{admin}",
- "levels": -1,
- "insert_values": true}}' \
- http://127.0.0.1:8008/jsonrpc
-{"jsonrpc": "2.0",
- "id": 1,
- "result":
- {"meta":
- {"namespace": "http://tail-f.com/ns/aaa/1.1",
- "keypath": "/aaa:aaa/authentication/users/user{admin}",
- "prefix": "aaa",
- "types":
- {"http://tail-f.com/ns/aaa/1.1:passwdStr":
- [{"name": "http://tail-f.com/ns/aaa/1.1:passwdStr"},
- {"name": "MD5DigestString"}]}}},
- "data":
- {"kind": "list-entry",
- "name": "user",
- "qname": "aaa:user",
- "access":
- {"create": true,
- "update": true,
- "delete": true},
- "children":
- [{"kind": "key",
- "name": "name",
- "qname": "aaa:name",
- "info": {"string": "Login name of the user"},
- "mandatory": true,
- "access": {"update": true},
- "type": {"name": "string", "primitive": true}},
- ...]}}
-```
-{% endcode %}
-
-
-
-
-
-hide_schema
-
-`hide_schema` - Hides data that has been adorned with a `hidden` statement in YANG modules. The `hidden` statement is an extension defined in the `tailf-common` YANG module (http://tail-f.com/yang/common).
-
-**Params**
-
-```json
-{"th": ,
- "group_name": }
-```
-
-The `group_name` param is as defined by a `hidden` statement in a YANG module.
-
-**Result**
-
-```json
-{}
-```
-
-
-
-
-
-unhide_schema
-
-`unhide_schema` - Unhides data that has been adorned with a `hidden` statement in YANG modules. The `hidden` statement is an extension defined in the `tailf-common` YANG module (http://tail-f.com/yang/common).
-
-**Params**
-
-```json
-{"th": ,
- "group_name": ,
- "passwd": }
-```
-
-The `group_name` param is as defined by a `hidden` statement in a YANG module.
-
-The `passwd` param is the password needed to unhide the data that has been adorned with a `hidden` statement. The password is as defined in the `ncs.conf` file.
-
-**Result**
-
-```json
-{}
-```
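-
-A sketch of an `unhide_schema` call (the group name `debug` and the password are placeholders; they must match a `hidden` statement and the `ncs.conf` configuration):
-
-```bash
-curl \
-  --cookie 'sessionid=sess12541119146799620192;' \
-  -X POST \
-  -H 'Content-Type: application/json' \
-  -d '{"jsonrpc": "2.0", "id": 1,
-       "method": "unhide_schema",
-       "params": {"th": 2,
-                  "group_name": "debug",
-                  "passwd": "secret"}}' \
-  http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0", "id": 1, "result": {}}
-```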
-
-
-
-
-
-get_module_prefix_map
-
-`get_module_prefix_map` - Returns a map from module name to module prefix.
-
-**Params**
-
-Method takes no parameters.
-
-**Result**
-
-```json
-
-
-result = {"module-name": "module-prefix"}
-```
-
-**Example**
-
-{% code title="Example: Method get_module_prefix_map" %}
-```bash
-curl \
- --cookie 'sessionid=sess12541119146799620192;' \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", id: 1,
- "method": "get_module_prefix_map",
- "params": {}}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "result": {
- "cli-builtin": "cli-builtin",
- "confd_cfg": "confd_cfg",
- "iana-crypt-hash": "ianach",
- "ietf-inet-types": "inet",
- "ietf-netconf": "nc",
- "ietf-netconf-acm": "nacm",
- "ietf-netconf-monitoring": "ncm",
- "ietf-netconf-notifications": "ncn",
- "ietf-netconf-with-defaults": "ncwd",
- "ietf-restconf": "rc",
- "ietf-restconf-monitoring": "rcmon",
- "ietf-yang-library": "yanglib",
- "ietf-yang-types": "yang",
- "tailf-aaa": "aaa",
- "tailf-acm": "tacm",
- "tailf-common-monitoring2": "tfcg2",
- "tailf-confd-monitoring": "tfcm",
- "tailf-confd-monitoring2": "tfcm2",
- "tailf-kicker": "kicker",
- "tailf-netconf-extensions": "tfnce",
- "tailf-netconf-monitoring": "tncm",
- "tailf-netconf-query": "tfncq",
- "tailf-rest-error": "tfrerr",
- "tailf-rest-query": "tfrestq",
- "tailf-rollback": "rollback",
- "tailf-webui": "webui",
- }
-}
-```
-{% endcode %}
-
-
-
-
-
-run_action
-
-`run_action` - Invokes an action or RPC defined in a YANG module.
-
-**Params**
-
-```json
-{"th": ,
- "path": ,
- "params":
- "format": <"normal" | "bracket" | "json", default: "normal">,
- "comet_id": ,
- "handle": ,
- "details": <"normal" | "verbose" | "very_verbose" | "debug", optional>}
-```
-
-Actions are as specified in the YANG module, i.e. having a specific name and a well-defined set of parameters and result. The `path` param is a keypath pointing to an action or RPC and the `params` param is a JSON object with action parameters.
-
-The `format` param defines if the result should be an array of key values or a pre-formatted string in bracket format as seen in the CLI. The result is also as specified by the YANG module.
-
-Both a `comet_id` and `handle` need to be provided in order to receive notifications.
-
-The `details` param can be given together with `comet_id` and `handle` in order to get a progress trace for the action. `details` specifies the verbosity of the progress trace. After the action has been invoked, the `comet` method can be used to get the progress trace for the action. If the `details` param is omitted, the progress trace will be disabled.
-
-The `debug` param can be used the same way as the `details` param to get debug trace events for the action. These are the same trace events that can be displayed in the CLI with the "debug" pipe command when invoking the action. The `debug` param is an array with all debug flags for which debug events should be displayed. Valid values are "service", "template", "xpath", "kicker", and "subscriber". Any other value will result in an "invalid params" error. The `debug` param can be used together with the `details` param to get both progress and debug trace events for the operation.
-
-The `debug_service_name` and `debug_template_name` params can be used to specify a service or template name respectively for which to display debug events.
-
-**Note**: This method is often used to call an action that uploads binary data (e.g., images) to be retrieved at a later time. While retrieval is not a problem, uploading is, because JSON-RPC request payloads have a size limitation (e.g., 64 kB). The limitation is needed for performance reasons, because the payload is first buffered before the JSON string is parsed and the request is evaluated. When you have scenarios that need binary uploads, use the CGI functionality instead: its size limitation is configurable, and it is not limited to JSON payloads, so streaming techniques can be used.
-
-**Result**
-
-```json
-
-
-result = {"name": , "value": }
-```
-
-**Errors (specific)**
-
-```json
-{"type": "action.invalid_result", "data": {"path": }}
-```
-
-**Example**
-
-{% code title="Example: Method run_action" %}
-```bash
-curl \
- --cookie 'sessionid=sess12541119146799620192;' \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", id: 1,
- "method": "run_action",
- "params": {"th": 2,
- "path": "/dhcp:dhcp/set-clock",
- "params": {"clockSettings": "2014-02-11T14:20:53.460%2B01:00"}}}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "result": [{"name":"systemClock", "value":"0000-00-00T03:00:00+00:00"},
- {"name":"inlineContainer/bar", "value":"false"},
- {"name":"hardwareClock","value":"0000-00-00T04:00:00+00:00"}]}
-curl \
- -s \
- --cookie 'sessionid=sess12541119146799620192;' \
- -X POST \
- -H 'Content-Type: application/json' \
- -d'{"jsonrpc": "2.0", "id": 1,
- "method": "run_action",
- "params": {"th": 2,
- "path": "/dhcp:dhcp/set-clock",
- "params": {"clockSettings":
- "2014-02-11T14:20:53.460%2B01:00"},
- "format": "bracket"}}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "result": "systemClock 0000-00-00T03:00:00+00:00\ninlineContainer {\n \
- bar false\n}\nhardwareClock 0000-00-00T04:00:00+00:00\n"}
-
-curl \
- -s \
- --cookie 'sessionid=sess12541119146799620192;' \
- -X POST \
- -H 'Content-Type: application/json' \
- -d'{"jsonrpc": "2.0", "id": 1,
- "method": "run_action",
- "params": {"th": 2,
- "path": "/dhcp:dhcp/set-clock",
- "params": {"clockSettings":
- "2014-02-11T14:20:53.460%2B01:00"},
- "format": "json"}}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "result": {"systemClock": "0000-00-00T03:00:00+00:00",
- "inlineContainer": {"bar": false},
- "hardwareClock": "0000-00-00T04:00:00+00:00"}}
-```
-{% endcode %}
-
-
-
-### Session
-
-
-
-login
-
-`login` - Creates a user session and sets a browser cookie.
-
-**Params**
-
-```json
-{}
-```
-
-```json
-{"user": , "passwd": , "ack_warning": }
-```
-
-There are two versions of the `login` method. The method with no parameters only invokes Package Authentication, since credentials can be supplied with the whole HTTP request. The method with parameters is used when credentials may need to be supplied with the method parameters; this method invokes all authentication methods, including Package Authentication.
-
-The `user` and `passwd` are the credentials to be used in order to create a user session. The common AAA engine in NSO is used to verify the credentials.
-
-If the method fails with a warning, the warning needs to be displayed to the user, along with a checkbox to allow the user to acknowledge the warning. The acknowledgment of the warning translates to setting `ack_warning` to `true`.
-
-**Result**
-
-```json
-{"warning": }
-```
-
-**Note**: The response will have a `Set-Cookie` HTTP header with a `sessionid` cookie, which will be your authentication token for upcoming JSON-RPC requests.
-
-The `warning` is a free-text string that should be displayed to the user after a successful login. This is not to be mistaken with a failed login that has a `warning` as well. In case of a failure, the user should also acknowledge the warning, not just have it displayed for optional reading.
-
-**Multi-factor authentication**
-
-```json
-{"challenge_id": , "challenge_prompt": }
-```
-
-**Note**: A challenge response will have a `challenge_id` and `challenge_prompt`, which need to be responded to with an upcoming JSON-RPC `challenge_response` request.
-
-**Note**: The `challenge_prompt` may be multi-line, which is why it is base64-encoded.
-
-**Example**
-
-{% code title="Example: Method login" %}
-```bash
-curl \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "login",
- "params": {"user": "joe",
- "passwd": "SWkkasE32"}}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "error":
- {"code": -32000,
- "type": "rpc.method.failed",
- "message": "Method failed"}}
-
-curl \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "login",
- "params": {"user": "admin",
- "passwd": "admin"}}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "result": {}}
-```
-{% endcode %}
-
-**Note**: The `sessionid` cookie is set at this point in your user agent (browser). In our examples, we set the cookie explicitly in the upcoming requests for clarity.
-
-```bash
-curl \
- --cookie "sessionid=sess4245223558720207078;" \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "get_trans"}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "result": {"trans": []}}
-```
-
-
-
-
-
-challenge_response
-
-`challenge_response` - Creates a user session and sets a browser cookie.
-
-**Params**
-
-```json
-{"challenge_id": , "response": , "ack_warning": }
-```
-
-The `challenge_id` and `response` params are the multi-factor response to be used in order to create a user session. The common AAA engine in NSO is used to verify the response.
-
-If the method fails with a warning, the warning needs to be displayed to the user, along with a checkbox to allow the user to acknowledge the warning. The acknowledgment of the warning translates to setting `ack_warning` to `true`.
-
-**Result**
-
-```json
-{"warning": }
-```
-
-**Note**: The response will have a `Set-Cookie` HTTP header with a `sessionid` cookie, which will be your authentication token for upcoming JSON-RPC requests.
-
-The `warning` is a free-text string that should be displayed to the user after a successful challenge response. This is not to be mistaken with a failed challenge response that has a `warning` as well. In case of a failure, the user should also acknowledge the warning, not just have it displayed for optional reading.
-
-**Example**
-
-{% code title="Example: Method challenge-response" %}
-```bash
-curl \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "challenge_response",
- "params": {"challenge_id": "123",
- "response": "SWkkasE32"}}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "error":
- {"code": -32000,
- "type": "rpc.method.failed",
- "message": "Method failed"}}
-
-curl \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "challenge_response",
- "params": {"challenge_id": "123",
- "response": "SWEddrk1"}}' \
- http://127.0.0.1:8008/jsonrpc
-
- {"jsonrpc": "2.0",
- "id": 1,
- "result": {}}
-```
-{% endcode %}
-
-**Note**: The `sessionid` cookie is set at this point in your user agent (browser). In our examples, we set the cookie explicitly in the upcoming requests for clarity.
-
-```bash
-curl \
- --cookie "sessionid=sess4245223558720207078;" \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "get_trans"}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "result": {"trans": []}}
-```
-
-
-
-
-
-logout
-
-`logout` - Removes a user session and invalidates the browser cookie.
-
-The HTTP cookie identifies the user session so no input parameters are needed.
-
-**Params**
-
-None.
-
-**Result**
-
-```json
-{}
-```
-
-**Example**
-
-{% code title="Example: Method logout" %}
-```bash
-curl \
- --cookie "sessionid=sess4245223558720207078;" \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "logout"}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "result": {}}
-
-curl \
- --cookie "sessionid=sess4245223558720207078;" \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "logout"}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "error":
- {"code": -32000,
- "type": "session.invalid_sessionid",
- "message": "Invalid sessionid"}}
-```
-{% endcode %}
-
-
-
-
-
-kick_user
-
-`kick_user` - Kills a user session, i.e., kicks out the user.
-
-**Params**
-
-```json
-{"user": }
-```
-
-The `user` param is either the username of a logged-in user or a session ID.
-
-**Result**
-
-```json
-{}
-```
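-
-For illustration (the username `oper` is a placeholder for a logged-in user):
-
-```bash
-curl \
-  --cookie 'sessionid=sess12541119146799620192;' \
-  -X POST \
-  -H 'Content-Type: application/json' \
-  -d '{"jsonrpc": "2.0", "id": 1,
-       "method": "kick_user",
-       "params": {"user": "oper"}}' \
-  http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0", "id": 1, "result": {}}
-```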
-
-
-
-### Session Data
-
-
-
-get_session_data
-
-`get_session_data` - Gets session data from the session store.
-
-**Params**
-
-```json
-{"key": }
-```
-
-The `key` param identifies which stored data to get. Read more about the session store in the `put_session_data` method.
-
-**Result**
-
-```json
-{"value": }
-```
-
-
-
-
-
-put_session_data
-
-`put_session_data` - Puts session data into the session store. The session store is a small key-value server-side database where data can be stored under a unique key. The data may be an arbitrary object, but not a function object. The object is serialized into a JSON string and then stored on the server.
-
-**Params**
-
-```json
-{"key": ,
- "value": }
-```
-
-The `key` param is the unique key under which the data in the `value` param is to be stored.
-
-**Result**
-
-```json
-{}
-```
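-
-A sketch of storing a value and reading it back (the key `ui-state` and the value are illustrative):
-
-```bash
-curl \
-  --cookie 'sessionid=sess12541119146799620192;' \
-  -X POST \
-  -H 'Content-Type: application/json' \
-  -d '{"jsonrpc": "2.0", "id": 1,
-       "method": "put_session_data",
-       "params": {"key": "ui-state",
-                  "value": {"page": 2}}}' \
-  http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0", "id": 1, "result": {}}
-
-curl \
-  --cookie 'sessionid=sess12541119146799620192;' \
-  -X POST \
-  -H 'Content-Type: application/json' \
-  -d '{"jsonrpc": "2.0", "id": 1,
-       "method": "get_session_data",
-       "params": {"key": "ui-state"}}' \
-  http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0", "id": 1, "result": {"value": {"page": 2}}}
-```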
-
-
-
-
-
-erase_session_data
-
-`erase_session_data` - Erases session data previously stored with `put_session_data`.
-
-**Params**
-
-```json
-{"key": }
-```
-
-The `key` param identifies which session data to erase. Read more about the session store in the `put_session_data` method.
-
-**Result**
-
-```json
-{}
-```
-
-
-
-### Transaction
-
-
-
-get_trans
-
-`get_trans` - Lists all transactions.
-
-**Params**
-
-None.
-
-**Result**
-
-```json
-{"trans": }
-
-transaction =
- {"db": <"running" | "startup" | "candidate">,
- "mode": <"read" | "read_write", default: "read">,
- "conf_mode": <"private" | "shared" | "exclusive", default: "private">,
- "tag": ,
- "th": }
-```
-
-**Example**
-
-{% code title="Example: Method get_trans" %}
-```bash
-curl \
- --cookie 'sessionid=sess12541119146799620192;' \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "get_trans"}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "result":
- {"trans":
- [{"db": "running",
- "th": 2}]}}
-```
-{% endcode %}
-
-
-
-
-
-new_trans
-
-`new_trans` - Creates a new transaction.
-
-**Params**
-
-```json
-{"db": <"startup" | "running" | "candidate", default: "running">,
- "mode": <"read" | "read_write", default: "read">,
- "conf_mode": <"private" | "shared" | "exclusive", default: "private">,
- "tag": ,
- "action_path": ,
- "th": ,
- "on_pending_changes": <"reuse" | "reject" | "discard", default: "reuse">}
-```
-
-The `conf_mode` param specifies which transaction semantics to use when it comes to lock and commit strategies. These three modes mimic the modes available in the CLI.
-
-The meaning of `private`, `shared`, and `exclusive` differs slightly depending on how the system is configured: with a writable running, startup, or candidate configuration.
-
-* `private` (*writable running enabled*) - Edit a private copy of the running configuration; no lock is taken.
-
-* `private` (*writable running disabled, startup enabled*) - Edit a private copy of the startup configuration; no lock is taken.
-
-* `exclusive` (*candidate enabled*) - Lock the running configuration and the candidate configuration, and edit the candidate configuration.
-
-* `exclusive` (*candidate disabled, startup enabled*) - Lock the running configuration (if enabled) and the startup configuration, and edit the startup configuration.
-
-* `shared` (*writable running enabled, candidate enabled*) - A deprecated setting.
-
-The `tag` param is a way to tag transactions with a keyword so that they can be filtered out when you call the `get_trans` method.
-
-The `action_path` param is a keypath pointing to an action or RPC. Use `action_path` when you need to read action/rpc input parameters.
-
-The `th` param is a way to create transactions within other `read_write` transactions. Note that it should always be possible to commit a child transaction (the transaction-in-transaction) to the parent transaction (the original transaction), even if no validation has been done on the child transaction, or if the validation failed due to invalid configuration. Validation on the child transaction is still possible in order to determine if the transaction is valid.
-
-The `on_pending_changes` param decides what to do if the candidate already has been written to, e.g. a CLI user has started a shared configuration session and changed a value in the configuration (without committing it). If this parameter is omitted, the default behavior is to silently reuse the candidate. If `reject` is specified, the call to the `new_trans` method will fail if the candidate is non-empty. If `discard` is specified, the candidate is silently cleared if it is non-empty.
-
-**Result**
-
-```json
-{"th": }
-```
-
-A new transaction handler ID.
-
-**Errors (specific)**
-
-```json
-{"type": "trans.confirmed_commit_in_progress"}
-{"type": "db.locked", "data": {"sessions": }}
-```
-
-The `data.sessions` param is an array of strings describing the current sessions of the locking user, e.g., an array of "admin tcp (cli from 192.245.2.3) on since 2006-12-20 14:50:30 exclusive".
-
-**Example**
-
-{% code title="Example: Method new_trans" %}
-```bash
-curl \
- --cookie 'sessionid=sess12541119146799620192;' \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "new_trans",
- "params": {"db": "running",
- "mode": "read"}}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "result": 2}
-```
-{% endcode %}
-
-
-
-
-
-delete_trans
-
-`delete_trans` - Deletes a transaction created by `new_trans` or `new_webui_trans`.
-
-**Params**
-
-```json
-{"th": }
-```
-
-**Result**
-
-```json
-{}
-```
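-
-For illustration (the handle `2` refers to a transaction previously returned by `new_trans`):
-
-```bash
-curl \
-  --cookie 'sessionid=sess12541119146799620192;' \
-  -X POST \
-  -H 'Content-Type: application/json' \
-  -d '{"jsonrpc": "2.0", "id": 1,
-       "method": "delete_trans",
-       "params": {"th": 2}}' \
-  http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0", "id": 1, "result": {}}
-```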
-
-
-
-
-
-set_trans_comment
-
-`set_trans_comment` - Adds a comment to the active read-write transaction. This comment will be stored in rollback files and can be viewed in the `/rollback:rollback-files/file` list. **Note**: From NSO 6.5 it is recommended to instead use the `comment` flag passed to the `validate_commit` or `apply` method which in addition to storing the comment in the rollback file also propagates it down to the devices participating in the transaction.
-
-**Params**
-
-```json
-{"th": }
-```
-
-**Result**
-
-```json
-{}
-```
-
-
-
-
-
-set_trans_label
-
-`set_trans_label` - Adds a label to the active read-write transaction. This label will be stored in rollback files and can be viewed in the `/rollback:rollback-files/file` list.\
-**Note**: From NSO 6.5 it is recommended to instead use the `label` flag passed to the `validate_commit` or `apply` method which in addition to storing the label in the rollback file also sets it in resulting commit queue items and propagates it down to the devices participating in the transaction.
-
-**Params**
-
-```json
-{"th": }
-```
-
-**Result**
-
-```json
-{}
-```
-
-
-
-### Transaction - Changes
-
-
-
-is_trans_modified
-
-`is_trans_modified` - Checks if any modifications have been done to a transaction.
-
-**Params**
-
-```json
-{"th": }
-```
-
-**Result**
-
-```json
-{"modified": }
-```
-
-
-
-
-
-get_trans_changes
-
-`get_trans_changes` - Extracts modifications done to a transaction.
-
-**Params**
-
-```json
-{"th": },
- "output": <"compact" | "legacy", default: "legacy">
-```
-
-The `output` parameter controls the result content. The `legacy` format includes `old` and `value` for all operation types, even if their value is undefined; undefined values are represented by an empty string. The `compact` format excludes `old` and `value` if their value is undefined.
-
-**Result**
-
-```json
-{"changes": }
-
-change =
- {"keypath": ,
- "op": <"created" | "deleted" | "modified" | "value_set">,
- "value": ,
- "old":
- }
-```
-
-The `value` param is only interesting if `op` is set to one of `modified` or `value_set`.
-
-The `old` param is only interesting if `op` is set to `modified`.
-
-**Example**
-
-{% code title="Example: Method get_trans_changes" %}
-```bash
-curl \
- --cookie 'sessionid=sess12541119146799620192;' \
- -X POST \
- -H 'Content-Type: application/json' \
- -d '{"jsonrpc": "2.0", "id": 1,
- "method": "get_trans_changes",
- "params": {"th": 2}}' \
- http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0",
- "id": 1,
- "result":
- [{"keypath":"/dhcp:dhcp/default-lease-time",
- "op": "value_set",
- "value": "100",
- "old": ""}]}
-```
-{% endcode %}
-
-
-
-
-
-validate_trans
-
-`validate_trans` - Validates a transaction.
-
-**Params**
-
-```json
-{"th": }
-```
-
-**Result**
-
-```json
-{}
-```
-
-Or:
-
-```json
-{"warnings": }
-
-warning = {"paths": , "message": }
-```
-
-**Errors (specific)**
-
-```json
-{"type": "trans.resolve_needed", "data": {"users": }}
-```
-
-The `data.users` param is an array of conflicting usernames.
-
-```json
-{"type": "trans.validation_failed", "data": {"errors": }}
-
-error = {"paths": , "message": }
-```
-
-The entries in the `data.errors` param point to the keypaths that are invalid.
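-
-A hedged example of validating a read-write transaction (handle `2` is a placeholder; an empty result means the transaction is valid):
-
-```bash
-curl \
-  --cookie 'sessionid=sess12541119146799620192;' \
-  -X POST \
-  -H 'Content-Type: application/json' \
-  -d '{"jsonrpc": "2.0", "id": 1,
-       "method": "validate_trans",
-       "params": {"th": 2}}' \
-  http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0", "id": 1, "result": {}}
-```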
-
-
-
-
-
-get_trans_conflicts
-
-`get_trans_conflicts` - Gets the conflicts registered in a transaction.
-
-**Params**
-
-```json
-{"th": }
-```
-
-**Result**
-
-```json
-{"conflicts:" }
-
-conflict =
- {"keypath": ,
- "op": <"created" | "deleted" | "modified" | "value_set">,
- "value": ,
- "old": }
-```
-
-The `value` param is only interesting if `op` is set to one of `created`, `modified` or `value_set`.
-
-The `old` param is only interesting if `op` is set to `modified`.
-
-
-
-
-
-resolve_trans
-
-`resolve_trans` - Tells the server that the conflicts have been resolved.
-
-**Params**
-
-```json
-{"th": }
-```
-
-**Result**
-
-```json
-{}
-```
-
-
-
-### Transaction - Commit Changes
-
-
-
-validate_commit
-
-`validate_commit` - Validates a transaction before calling `commit`. If this method succeeds (with or without warnings) then the next operation must be a call to either `commit` or `clear_validate_lock`. The configuration will be locked for access by other users until one of these methods is called.
-
-**Params**
-
-```json
-{"th": }
-```
-
-```json
-{"comet_id": }
-```
-
-```json
-{"handle": }
-```
-
-```json
-{"details": <"normal" | "verbose" | "very_verbose" | "debug", optional>}
-```
-
-```json
-{"debug": }
-debug_flags = <"service" | "template" | "xpath" | "kicker" | "subscriber">
-```
-
-```json
-{"debug_service_name": }
-```
-
-```json
-{"debug_template_name": }
-```
-
-```json
-{"flags": }
-flags =
-```
-
-The `comet_id`, `handle`, and `details` params can be given together in order to get progress tracing for the `validate_commit` operation. The same `comet_id` can also be used to get the progress trace for any coming commit operations. In order to get progress tracing for commit operations, these three parameters have to be provided with the `validate_commit` operation. The `details` parameter specifies the verbosity of the progress trace. After the operation has been invoked, the `comet` method can be used to get the progress trace for the operation.
-
-The `debug` param can be used the same way as the `details` param to get debug trace events for the validate\_commit and corresponding commit operation. These are the same trace events that can be displayed in the CLI with the "debug" pipe command for the commit operation. The `debug` param is an array with all debug flags for which debug events should be displayed. The `debug` param can be used together with the `details` param to get both progress and debug trace events for the operation.
-
-The `debug_service_name` and `debug_template_name` params can be used to specify a service or template name respectively for which to display debug events.
-
-See the `commit` method for available flags.
-
-**Note**: If you intend to pass `flags` to the `commit` method, it is recommended to pass the same `flags` to `validate_commit` since they may have an effect during the validate step.
-
-**Result**
-
-```json
-{}
-```
-
-Or:
-
-```json
-{"warnings": }
-warning = {"paths": , "message": }
-```
-
-**Errors (specific)**
-
-Same as for the `validate_trans` method.
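-
-A sketch of the lock-validate step (handle `2` is illustrative; on success, the next call must be either `commit` or `clear_validate_lock`):
-
-```bash
-curl \
-  --cookie 'sessionid=sess12541119146799620192;' \
-  -X POST \
-  -H 'Content-Type: application/json' \
-  -d '{"jsonrpc": "2.0", "id": 1,
-       "method": "validate_commit",
-       "params": {"th": 2}}' \
-  http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0", "id": 1, "result": {}}
-```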
-
-
-
-
-
-clear_validate_lock
-
-`clear_validate_lock` - Releases validate lock taken by `validate_commit`.
-
-**Params**
-
-```json
-{"th": }
-```
-
-**Result**
-
-```json
-{}
-```
-
-
-
-
-
-commit
-
-`commit` - Commits the configuration into the running datastore.
-
-**Params**
-
-```json
-{"th": }
-```
-
-```json
-{"release_locks": }
-```
-
-```json
-{"rollback-id": }
-```
-
-```json
-{"flags": }
-flags =
-```
-
-If `rollback-id` is set to `true`, the response will include the ID of the rollback file created during the commit if any.
-
-The `flags` param is a list of flags that can change the commit behavior:
-
-* `label=LABEL` - Sets a user-defined label that is visible in rollback files, compliance reports, notifications, and events referencing the transaction and resulting commit queue items. If supported, the label will also be propagated down to the devices participating in the transaction.
-* `comment=COMMENT` - Sets a comment visible in rollback files and compliance reports. If supported, the comment will also be propagated down to the devices participating in the transaction.
-* `dry-run=FORMAT` - Where FORMAT is the desired output format: `xml`, `cli`, or `native`. Validate and display the configuration changes but do not perform the actual commit. Neither CDB nor the devices are affected. Instead, the effects that would have taken place are shown in the returned output.
-* `dry-run-reverse` - Used with the `dry-run=native` flag, this will display the device commands for getting back to the current running state in the network if the commit is successfully executed. Beware that if any changes are made later to the same data, the returned reverse device commands become invalid.
-* `confirm-network-state`\
- NSO will check network state as part of the commit. This includes checking device configurations for out-of-band changes and processing such changes according to the out-of-band policy.
-* `confirm-network-state=re-evaluate-policies`\
- In addition to processing the newly found out-of-band device changes, NSO will process again the out-of-band policies for the services that the commit is touching.
-
-- `no-revision-drop` - NSO will not run its data model revision algorithm, which requires all participating managed devices to have all parts of the data models for all data contained in this transaction. Thus, this flag forces NSO to never silently drop any data set operations towards a device.
-- `no-overwrite` - NSO will check that the modified data and the data read when computing the device modifications have not changed on the device compared to NSO's view of the data. Cannot be used together with `no-out-of-sync-check`.
-- `no-networking` - Do not send data to the devices; this is a way to manipulate CDB in NSO without generating any southbound traffic.
-- `no-out-of-sync-check` - Continue with the transaction even if NSO detects that a device's configuration is out of sync. Cannot be used together with `no-overwrite`.
-- `no-deploy` - Commit without invoking the service create method, i.e., write the service instance data without activating the service(s). The service(s) can later be redeployed to write the changes of the service(s) to the network.
-- `reconcile=OPTION` - Reconcile the service data. All data which existed before the service was created will now be owned by the service. When the service is removed that data will also be removed. In technical terms, the reference count will be decreased by one for everything that existed prior to the service. If manually configured data exists below in the configuration tree, that data is kept unless the option `discard-non-service-config` is used.
-- `use-lsa` - Force handling of the LSA nodes as such. This flag tells NSO to propagate applicable commit flags and actions to the LSA nodes without applying them on the upper NSO node itself. The commit flags affected are `dry-run`, `no-networking`, `no-out-of-sync-check`, `no-overwrite` and `no-revision-drop`.
-- `no-lsa` - Do not handle any of the LSA nodes as such. These nodes will be handled as any other device.
-- `commit-queue=MODE` - Where MODE is: `async`, `sync`, or `bypass`. Commit the transaction data to the commit queue.
- * If the `async` value is set, the operation returns successfully if the transaction data has been successfully placed in the queue.
- * The `sync` value will cause the operation to not return until the transaction data has been sent to all devices, or a timeout occurs.
- * The `bypass` value means that if `/devices/global-settings/commit-queue/enabled-by-default` is `true`, the data in this transaction will bypass the commit queue. The data will be written directly to the devices.
-- `commit-queue-atomic=ATOMIC` - Where `ATOMIC` is: `true` or `false`. Sets the atomic behavior of the resulting queue item. If `ATOMIC` is set to `false`, the devices contained in the resulting queue item can start executing if the same devices in other non-atomic queue items ahead of it in the queue are completed. If set to `true`, the atomic integrity of the queue item is preserved.
-- `commit-queue-block-others` - The resulting queue item will block subsequent queue items, that use any of the devices in this queue item, from being queued.
-- `commit-queue-lock` - Place a lock on the resulting queue item. The queue item will not be processed until it has been unlocked, see the actions `unlock` and `lock` in `/devices/commit-queue/queue-item`. No following queue items, using the same devices, will be allowed to execute as long as the lock is in place.
-- `commit-queue-tag=TAG` - Where `TAG` is a user-defined opaque tag. The tag is present in all notifications and events sent referencing the specific queue item.\
- **Note**: `commit-queue-tag` is deprecated from NSO version 6.5. The `label` flag can be used instead.
-- `commit-queue-timeout=TIMEOUT` - Where `TIMEOUT` is infinity or a positive integer. Specifies a maximum number of seconds to wait for the transaction to be committed. If the timer expires, the transaction data is kept in the commit queue, and the operation returns successfully. If the timeout is not set, the operation waits until completion indefinitely.
-- `commit-queue-error-option=OPTION` - Where `OPTION` is: `continue-on-error`, `rollback-on-error` or `stop-on-error`. Depending on the selected error option NSO will store the reverse of the original transaction to be able to undo the transaction changes and get back to the previous state. This data is stored in the `/devices/commit-queue/completed` tree from where it can be viewed and invoked with the `rollback` action. When invoked, the data will be removed.
- * The `continue-on-error` value means that the commit queue will continue on errors. No rollback data will be created.
- * The `rollback-on-error` value means that the commit queue item will roll back on errors. The commit queue will place a lock with `block-others` on the devices and services in the failed queue item. The `rollback` action will then automatically be invoked when the queue item has finished its execution. The lock is removed as part of the rollback.
-  * The `stop-on-error` value means that the commit queue will place a lock with `block-others` on the devices and services in the failed queue item. The lock must then either manually be released when the error is fixed or the `rollback` action under `/devices/commit-queue/completed` be invoked.
-
- **Note**: Read about error recovery in [Commit Queue](../../../operation-and-usage/operations/nso-device-manager.md#user_guide.devicemanager.commit-queue) for a more detailed explanation.
-- `trace-id=TRACE_ID` - Use the provided trace ID as part of the log messages emitted while processing. If no trace ID is given, NSO will generate and assign a trace ID to the processing.\
- **Note**: `trace-id` is deprecated from NSO version 6.3. Capabilities within Trace Context will provide support for `trace-id`, see the section [TraceContext](json-rpc-api.md#trace-context).
-
-**Note**: Must be preceded by a call to `validate_commit`.
-
-**Note**: The transaction handler is deallocated as a side effect of this method.
-
-**Result**
-
-Successful commit without any arguments:
-
-```json
-{}
-```
-
-Successful commit with `rollback-id=true`:
-
-```json
-{"rollback-id": {"fixed": 10001}}
-```
-
-Successful commit with `commit-queue=async`:
-
-```json
-{"commit_queue_id": }
-```
-
-The `commit_queue_id` is returned if the commit entered the commit queue, either by specifying `commit-queue=async` or by enabling it in the configuration.
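-
-A sketch of a commit that requests the rollback file ID (handle `2` and the returned ID are illustrative; the call must be preceded by `validate_commit`):
-
-```bash
-curl \
-  --cookie 'sessionid=sess12541119146799620192;' \
-  -X POST \
-  -H 'Content-Type: application/json' \
-  -d '{"jsonrpc": "2.0", "id": 1,
-       "method": "commit",
-       "params": {"th": 2,
-                  "rollback-id": true}}' \
-  http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0", "id": 1, "result": {"rollback-id": {"fixed": 10001}}}
-```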
-
-
-
-
-
-apply
-
-`apply` - Performs validate, prepare and commit/abort in one go.
-
-**Params**
-
-```json
-{"th": }
-```
-
-```json
-{"comet_id": }
-```
-
-```json
-{"handle": }
-```
-
-```json
-{"details": <"normal" | "verbose" | "very_verbose" | "debug", optional>}
-```
-
-```json
-{"debug": }
-debug_flags = <"service" | "template" | "xpath" | "kicker" | "subscriber">
-```
-
-```json
-{"debug_service_name": }
-```
-
-```json
-{"debug_service_name": }
-```
-
-```json
-{"flags": }
-flags =
-```
-
-The `comet_id`, `handle`, and `details` params can be given together in order to get progress tracing for the operation. The `details` parameter specifies the verbosity of the progress trace. After the operation has been invoked, the `comet` method can be used to get the progress trace for the operation.
-
-The `debug` param can be used the same way as the `details` param to get debug trace events. These are the same trace events that can be displayed in the CLI with the "debug" pipe command for the commit operation. The `debug` param is an array with all debug flags for which debug events should be displayed. The `debug` param can be used together with the `details` param to get both progress and debug trace events for the operation.
-
-The `debug_service_name` and `debug_template_name` params can be used to specify a service or template name respectively for which to display debug events.
-
-See the `commit` method for available flags.
-
-**Result**
-
-See result for method `commit`.
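-
-For illustration, applying a transaction with a dry run (handle `2` and the flag value are placeholders):
-
-```bash
-curl \
-  --cookie 'sessionid=sess12541119146799620192;' \
-  -X POST \
-  -H 'Content-Type: application/json' \
-  -d '{"jsonrpc": "2.0", "id": 1,
-       "method": "apply",
-       "params": {"th": 2,
-                  "flags": ["dry-run=cli"]}}' \
-  http://127.0.0.1:8008/jsonrpc
-```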
-
-
-
-### Transaction - Web UI
-
-
-
-get_webui_trans
-
-`get_webui_trans` - Gets the WebUI read-write transaction.
-
-**Result**
-
-```json
-{"trans": }
-
-trans =
- {"db": <"startup" | "running" | "candidate", default: "running">,
- "conf_mode": <"private" | "shared" | "exclusive", default: "private">,
- "th":
- }
-```
-
-
-
-
-
-new_webui_trans
-
-`new_webui_trans` - Creates a read-write transaction that can be retrieved by `get_webui_trans`.
-
-**Params**
-
-```json
-{"db": <"startup" | "running" | "candidate", default: "running">,
- "conf_mode": <"private" | "shared" | "exclusive", default: "private">
- "on_pending_changes": <"reuse" | "reject" | "discard", default: "reuse">}
-```
-
-See `new_trans` for the semantics of the parameters and specific errors.
-
-The `on_pending_changes` param decides what to do if the candidate already has been written to, e.g. a CLI user has started a shared configuration session and changed a value in the configuration (without committing it). If this parameter is omitted, the default behavior is to silently reuse the candidate. If `reject` is specified, the call to the `new_webui_trans` method will fail if the candidate is non-empty. If `discard` is specified, the candidate is silently cleared if it is non-empty.
-
-**Result**
-
-```json
-{"th": }
-```
-
-A new transaction handler ID.
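-
-A hedged example (the returned handle is illustrative):
-
-```bash
-curl \
-  --cookie 'sessionid=sess12541119146799620192;' \
-  -X POST \
-  -H 'Content-Type: application/json' \
-  -d '{"jsonrpc": "2.0", "id": 1,
-       "method": "new_webui_trans",
-       "params": {"db": "running",
-                  "conf_mode": "private"}}' \
-  http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0", "id": 1, "result": {"th": 3}}
-```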
-
-
-
-### Services
-
-
-
-get_template_variables
-
-`get_template_variables` - Extracts all variables from an NSO service/device template.
-
-**Params**
-
-```json
-{"th": ,
- "name": }
-```
-
-The `name` param is the name of the template to extract variables from.
-
-**Result**
-
-```json
-{"template_variables": }
-```
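-
-For illustration (the template name `my-service-template` and the returned variables are hypothetical):
-
-```bash
-curl \
-  --cookie 'sessionid=sess12541119146799620192;' \
-  -X POST \
-  -H 'Content-Type: application/json' \
-  -d '{"jsonrpc": "2.0", "id": 1,
-       "method": "get_template_variables",
-       "params": {"th": 2,
-                  "name": "my-service-template"}}' \
-  http://127.0.0.1:8008/jsonrpc
-
-{"jsonrpc": "2.0", "id": 1, "result": {"template_variables": ["DEVICE", "IP"]}}
-```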
-
-
-
-
-
-get_service_points
-
-`get_service_points` - List all service points. To be able to get the description part of the response, the `fxs` files need to be compiled with the `--include-doc` flag.
-
-**Result**
-
-```json
-{"description": ,
- "keys": ,
- "path": }
-```
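-
-A minimal call sketch (the session cookie is a placeholder):
-
-```bash
-curl \
-  --cookie 'sessionid=sess12541119146799620192;' \
-  -X POST \
-  -H 'Content-Type: application/json' \
-  -d '{"jsonrpc": "2.0", "id": 1,
-       "method": "get_service_points"}' \
-  http://127.0.0.1:8008/jsonrpc
-```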
-
-
-
-### Packages
-
-
-
-list_packages
-
-`list_packages` - Lists packages in NSO.
-
-**Params**
-
-```json
-{"status": <"installable" | "installed" | "loaded" | "all", default: "all">}
-```
-
-The `status` param specifies which package status to list:
-
-* `installable` - an array of all packages that can be installed.
-* `installed` - an array of all packages that are installed, but not loaded.
-* `loaded` - an array of all loaded packages.
-* `all` - all of the above are returned.
-
-**Result**
-
-```json
-{"packages": }
-```
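-
-For illustration, listing only loaded packages (the session cookie is a placeholder):
-
-```bash
-curl \
-  --cookie 'sessionid=sess12541119146799620192;' \
-  -X POST \
-  -H 'Content-Type: application/json' \
-  -d '{"jsonrpc": "2.0", "id": 1,
-       "method": "list_packages",
-       "params": {"status": "loaded"}}' \
-  http://127.0.0.1:8008/jsonrpc
-```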
-
-
diff --git a/development/connected-topics/README.md b/development/connected-topics/README.md
deleted file mode 100644
index 5e78a72f..00000000
--- a/development/connected-topics/README.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-description: Miscellaneous topics connected to NSO development.
-icon: object-intersect
----
-
-# Connected Topics
-
diff --git a/development/connected-topics/encryption-keys.md b/development/connected-topics/encryption-keys.md
deleted file mode 100644
index 0dae6080..00000000
--- a/development/connected-topics/encryption-keys.md
+++ /dev/null
@@ -1,91 +0,0 @@
----
-description: Manage and work with NSO encrypted strings.
----
-
-# Encrypted Strings
-
-By using the NSO built-in encrypted YANG extension types `tailf:aes-cfb-128-encrypted-string` or `tailf:aes-256-cfb-128-encrypted-string`, it is possible to store encrypted string values in NSO that can be decrypted. See the [tailf\_yang\_extensions(5)](../../resources/man/tailf_yang_extensions.5.md#yang-types-2) man page for more details on the encrypted string YANG extension types.
-
-## Decrypting the Encrypted Strings
-
-Encrypted string values can only be decrypted using `decrypt()` when NSO is running with the correct [cryptographic keys](../../administration/advanced-topics/cryptographic-keys.md). Python example:
-
-```python
-import ncs
-import _ncs
-# Install the crypto keys used to decrypt the string
-with ncs.maapi.Maapi() as maapi:
- maapi.install_crypto_keys(maapi.msock)
-# Decrypt the string
-my_decrypted_str = _ncs.decrypt(my_encrypted_str)
-```
-
-## Reading Encryption Keys using an External Command
-
-NSO supports reading encryption keys using an external command instead of storing them in `ncs.conf` to allow for use with external key management systems. For `ncs.conf` details, see the [ncs.conf(5) man page](../../resources/man/ncs.conf.5.md) under `/ncs-config/encrypted-strings`.
-
-To use this feature, set `/ncs-config/encrypted-strings/external-keys/command` to an executable command that will output the keys following the rules described in the following sections. The command will be executed on startup and when NSO reloads the configuration.
-
-If the external command fails during startup, the startup will abort. If the command fails during a reload, the error will be logged, and the previously loaded keys will be kept in the system.
-
-The process of providing encryption keys to NSO can be described by the following three steps:
-
-1. Read the configuration from the environment.
-2. Read encryption keys.
-3. Write encryption keys (or error on standard output).
-
-The value of `/ncs-config/encrypted-strings/external-keys/command-argument` is available in the command as the environment variable `NCS_EXTERNAL_KEYS_ARGUMENT`. The value of this configuration is only used by the configured command.
-
-The external command should return the encryption keys on standard output using the names as shown in the table below. The encryption key values are in hexadecimal format, just as in `ncs.conf`. See the example below for details.
-
-The following table shows the mapping from the name to the path in the configuration.
-
-
-| Name             | Configuration path                             |
-| ---------------- | ---------------------------------------------- |
-| AESCFB128_KEY    | /ncs-config/encrypted-strings/AESCFB128/key    |
-| AES256CFB128_KEY | /ncs-config/encrypted-strings/AES256CFB128/key |
-
-To signal an error, include `ERROR=message` in the output; this is the preferred method. A non-zero exit code or unsupported line content will also trigger an error. Any form of error will be logged to the development log, and no encryption keys will be available in the system.
-
-Example output providing all supported encryption key configuration settings (do not reuse):
-
-```
-AESCFB128_KEY=2b57c219e47582481b733c1adb84fc2g
-AES256CFB128_KEY=3c687d564e250ad987198d179537af563341357493ed2242ef3b16a881dd608g
-```
-
-Example error output:
-
-```
-ERROR=error message
-```
-
-Below is a complete example of an application written in Python providing encryption keys from a plain text file. The application is included in the [examples.ncs/sdk-api/external-encryption-keys](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/external-encryption-keys) example:
-
-```python
-#!/usr/bin/env python3
-
-import os
-import sys
-
-
-def main():
- key_file = os.getenv('NCS_EXTERNAL_KEYS_ARGUMENT', None)
- if key_file is None:
- error('NCS_EXTERNAL_KEYS_ARGUMENT environment not set')
- if len(key_file) == 0:
- error('NCS_EXTERNAL_KEYS_ARGUMENT is empty')
-
- try:
- with open(key_file, 'r') as f_obj:
- keys = f_obj.read()
- sys.stdout.write(keys)
- except Exception as ex:
- error('unable to open/read {}: {}'.format(key_file, ex))
-
-
-def error(msg):
- print('ERROR={}'.format(msg))
- sys.exit(1)
-
-
-if __name__ == '__main__':
- main()
-```
diff --git a/development/connected-topics/external-logging.md b/development/connected-topics/external-logging.md
deleted file mode 100644
index 07c7bb33..00000000
--- a/development/connected-topics/external-logging.md
+++ /dev/null
@@ -1,61 +0,0 @@
----
-description: Send the log data to an external command.
----
-
-# External Logging
-
-As a development feature, NSO supports sending log data as-is to an external command for reading on standard input. Since this is a development feature, there are a few limitations; for example, data sent to the external command is not guaranteed to be processed before the external application is shut down.
-
-## Enabling External Log Processing
-
-The general configuration of the external log processing is done in `ncs.conf`. Global and per-device settings controlling the external log processing for NED trace logs are stored in the CDB.
-
-To enable external log processing, set `/ncs-config/logs/external` to `true` and `/ncs-config/logs/command` to the full path of the command that will receive the log data. The same executable will be used for all log types.
-
-External configuration example:
-
-```xml
-<logs>
-  <external>true</external>
-  <command>./path/to/log_filter</command>
-</logs>
-```
-
-To support the debugging of the external log command behavior, a separate log file is used. This debugging log is configured under `/ncs-config/logs/ext-log`. The example below shows the configuration for `./logs/external.log` with the highest log level set:
-
-```xml
-<ext-log>
-  <enabled>true</enabled>
-  <filename>./logs/external.log</filename>
-  <level>7</level>
-</ext-log>
-```
-
-By default, NED trace output is written to a file, preserving backward compatibility. To write NED trace logs to a file for all devices except `example`, which will use external log processing, the following configuration can be entered in the CLI:
-
-```bash
-# devices global-settings trace-output file
-# devices device example trace-output external
-```
-
-When setting both `external` and `file` bits without setting `/ncs-config/logs/external` to `true`, a warning message will be logged to `ext-log`. When only setting the `external` bit, no logging will be done.
-
-## Processing Logs using an External Command
-
-After enabling external log processing, NSO will start one instance of the external command for each configured log destination. Processing of the log data is done by reading from standard input and processing it as required.
-
-The command-line arguments provide information about the log that is being processed and in what format the data is sent.
-
-The example below shows how the configured command `./log_processor` would be executed for NETCONF trace data configured to log in raw mode:
-
-```
-./log_processor 1 log "NETCONF Trace" netconf-trace raw
-```
-
-Command line argument position and meaning:
-
-* `version`: Protocol version, always set to `1`. Added for forward compatibility.
-* `action`: The action being performed. Is always set to `log`. Added for forward compatibility.
-* `name`: Name of the log being processed.
-* `log-type`: Type of log data being processed. For all but NETCONF and NED trace logs, this is set to `system`. Depending on the type of NED, one of `ned-trace-java`, `ned-trace-netconf`, and `ned-trace-snmp` is used. NETCONF trace is set to `netconf-trace`.
-* `log-mode`: Format of log data being sent. For all but NETCONF and NED trace logs, this will be `raw`. NETCONF and NED trace logs can be pretty-printed, and then the format will be `pretty`.
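-
-A minimal sketch of such a command (hypothetical, for illustration; it ignores the metadata arguments and appends everything received on standard input to a per-log file):
-
-```bash
-#!/bin/sh
-# NSO invokes the command as: <command> <version> <action> <name> <log-type> <log-mode>
-version=$1; action=$2; name=$3; log_type=$4; log_mode=$5
-
-mkdir -p /tmp/nso-ext-logs
-# Read log data from standard input and append it to a file named after the log type.
-exec cat >> "/tmp/nso-ext-logs/${log_type}.log"
-```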
diff --git a/development/connected-topics/scheduler.md b/development/connected-topics/scheduler.md
deleted file mode 100644
index 7c1e30a3..00000000
--- a/development/connected-topics/scheduler.md
+++ /dev/null
@@ -1,157 +0,0 @@
----
-description: Schedule background tasks in NSO.
----
-
-# Scheduler
-
-NSO includes a native time-based job scheduler suitable for scheduling background work. Tasks can be scheduled to run at particular times or periodically at fixed times, dates, or intervals. It can typically be used to automate system maintenance or administrative tasks.
-
-## Scheduling Periodic Work
-
-A standard Vixie Cron expression is used to represent the periodicity in which the task should run. When the task is triggered, the configured action is invoked on the configured action node instance. The action is run as the user that configured the task.
-
-Example: To schedule a task to run `sync-from` at 2 AM on the 1st of every month, we do:
-
-```bash
-admin(config)# scheduler task sync schedule "0 2 1 * *" \
-action-name sync-from action-node /devices
-```
-
-{% hint style="info" %}
-If the task was added through an XML `init` file, the task will run with the `system` user, which implies that AAA rules will not be applied at all. Thus, the task action will not be able to initiate device communication.
-{% endhint %}
-
-If the action node instance is given as an XPath 1.0 expression, the expression is evaluated with the root as the context node, and the expression must return a node set. The action is then invoked on each node in this node set.
-
-Optionally, action parameters can be configured in XML format to be passed to the action during invocation.
-
-```bash
-admin(config-task-sync)# action-params "<device>ce0</device><device>ce1</device>"
-admin(config)# commit
-```
-
-Once the task has been configured, you can view the next run times of the task:
-
-```cli
-admin(config)# scheduler task sync get-next-run-times display 3
-next-run-time [ 2017-11-01 02:00:00+00:00 2017-12-01 02:00:00+00:00 2018-01-01 02:00:00+00:00 ]
-```
-
-You could also see if the task is running or not:
-
-```bash
-admin# show scheduler task sync is-running
-is-running false
-```
-
-### Schedule Expression
-
-A standard Vixie Cron expression is a string comprising five fields separated by white space that represents a set of times. The following rules can be used to create an expression.
-
-The table below shows expression rules.
-
-| Field | Allowed values | Allowed special characters |
-| ------------ | --------------- | -------------------------- |
-| Minutes | 0-59 | \* , - / |
-| Hours | 0-23 | \* , - / |
-| Day of month | 1-31 | \* , - / |
-| Month | 1-12 or JAN-DEC | \* , - / |
-| Day of week | 0-6 or SUN-SAT | \* , - / |
-
-The following list describes the legal special characters and how you can use them in a Cron expression.
-
-* Star (`*`). Selects all values within a field. For example, `*` in the minute field selects every minute.
-* Comma (`,`). Commas are used to specify additional values. For example, using `MON,WED,FRI` in the day of week field.
-* Hyphen (`-`). Hyphens define ranges. For example, `1-5` in the day of week field indicates every day between Monday and Friday, inclusive.
-* Forward slash (`/`). Slashes can be combined with ranges to specify increments. For example, `*/5` in the minutes field indicates every 5 minutes.
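-
-Combining these rules, a task that runs every 15 minutes during office hours on selected weekdays could be scheduled as follows (a sketch; the task name and action are illustrative):
-
-```bash
-admin(config)# scheduler task office-hours-check schedule "*/15 8-17 * * MON,WED,FRI" \
-action-name check-sync action-node /devices
-admin(config)# commit
-```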
-
-### Scheduling Periodic Compaction
-
-[Compaction](../../administration/advanced-topics/cdb-persistence.md#compaction) in NSO can take a considerable amount of time, during which transactions could be blocked. To avoid disruption, it might be advantageous to schedule compaction during times of low NSO utilization. This can be done using the NSO scheduler and a service. See [examples.ncs/misc/periodic-compaction](https://github.com/NSO-developer/nso-examples/tree/6.6/misc/periodic-compaction) for an example that demonstrates how to create a periodic compaction service that can be scheduled using the NSO scheduler.
-
-## Scheduling Non-recurring Work
-
-The scheduler can also be used to configure non-recurring tasks that will run at a particular time.
-
-```bash
-admin(config)# scheduler task my-compliance-report time 2017-11-01T02:00:00+01:00 \
-action-name check-compliance action-node /reports
-```
-
-A non-recurring task will by default be removed when it has finished executing. It will be up to the action to raise an alarm if an error occurs. The task can also be kept in the task list by setting the `keep` leaf.
-
-## Scheduling in an HA Cluster
-
-In an HA cluster, a scheduled task will by default be run on the primary HA node. By configuring the `ha-mode` leaf a task can be scheduled to run on nodes with a particular HA mode, for example, scheduling a read-only action on the secondary nodes. More specifically, a task can be configured with the `ha-node-id` to only run on a certain node. These settings will not have any effect on a standalone node.
-
-```bash
-admin(config)# scheduler task my-compliance-report schedule "0 2 1 * *" \
-ha-mode secondary ha-node-id secondary-node1 \
-action-name check-compliance action-node /reports
-```
-
-{% hint style="info" %}
-The scheduler is disabled when HA is enabled and when HA mode is `NONE`. See [Mode of Operation](../../administration/management/high-availability.md#ha.moo) in HA for more details.
-{% endhint %}
-
-## Troubleshooting
-
-Troubleshooting information is covered below.
-
-### History Log
-
-To find out whether a scheduled task has run successfully or not, the easiest way is to view the history log of the scheduler. It will display the latest runs of the scheduled task.
-
-```bash
-admin# show scheduler task sync history | notab
-history history-entry 2017-11-01T02:00:00.55003+00:00 0
- duration 0.15
- succeeded true
-history history-entry 2017-12-01T02:00:00.549939+00:00 0
- duration 0.09
- succeeded true
-history history-entry 2017-01-01T02:00:00.550128+00:00 0
- duration 0.01
- succeeded false
- info "Resource device ce0 doesn't exist"
-```
-
-### XPath Log
-
-Detailed information from the XPath evaluator can be enabled and made available in the XPath log. Add the following snippet to `ncs.conf`.
-
-```xml
-<xpathTraceLog>
-  <enabled>true</enabled>
-  <filename>./xpath.trace</filename>
-</xpathTraceLog>
-```
-
-### Devel Log
-
-Error information is written to the development log. The development log is meant to be used as support while developing the application. It is enabled in `ncs.conf`:
-
-```xml
-<developer-log>
-  <enabled>true</enabled>
-  <file>
-    <name>./logs/devel.log</name>
-    <enabled>true</enabled>
-  </file>
-</developer-log>
-<developer-log-level>trace</developer-log-level>
-
-### Suspending the Scheduler
-
-While investigating a failure with a scheduled task or performing maintenance on the system, like upgrading, it might be useful to suspend the scheduler temporarily.
-
-```bash
-admin# scheduler suspend
-```
-
-When ready, the scheduler can be resumed.
-
-```bash
-admin# scheduler resume
-```
diff --git a/development/connected-topics/snmp-notification-receiver.md b/development/connected-topics/snmp-notification-receiver.md
deleted file mode 100644
index 6e03e72f..00000000
--- a/development/connected-topics/snmp-notification-receiver.md
+++ /dev/null
@@ -1,161 +0,0 @@
----
-description: Configure NSO to receive SNMP notifications.
----
-
-# SNMP Notification Receiver
-
-NSO can act as an SNMP notification receiver (v1, v2c, v3) for its managed devices. The application can register notification handlers and react to the notifications, for example, by mapping SNMP notifications to NSO alarms.
-
-_Figure: SNMP NED Compile Steps_
-
-The notification receiver is started in the Java VM by application code, as described below. The application code registers the handlers, which are invoked when a notification is received from a managed device. The NSO operator can enable and disable the notification receiver as needed. The notification receiver is configured in the `/snmp-notification-receiver` subtree.
-
-By default, nothing happens with received SNMP notifications. You need to register a handler that listens to the traps and does something useful with them. SNMP var-binds are typically sparse in information, so in many cases you will want to enrich the information and map the notification to some meaningful state. Sometimes a notification indicates an alarm state change; sometimes it indicates that the configuration of the device has changed. The appropriate action in these two cases is very different: in the first case, you want to interpret the notification as meaningful alarm information and submit a call to the NSO Alarm Manager; in the second case, you probably want to initiate a `check-sync`, `compare-config`, `sync` action sequence.
-
-## Configuring NSO to Receive SNMP Notifications
-
-The NSO operator must enable the SNMP notification receiver and configure the addresses NSO will use to listen for notifications. The primary parameters for the notification receiver are shown below.
-
-```
-+--rw snmp-notification-receiver
- +--rw enabled? boolean
- +--rw listen
- | +--rw udp [ip port]
- | +--rw ip inet:ip-address
- | +--rw port inet:port-number
- +--rw engine-id? snmp-engine-id
-```
-
-Notification reception can be turned on and off using the `enabled` leaf. NSO will listen for notifications on the endpoints configured in `listen`. There is normally no need to configure the NSO `engine-id` manually; NSO derives it automatically using the algorithm described in RFC 3411. However, an `engine-id` can be assigned manually by setting this leaf.
-
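-For example, reception could be enabled and a listen endpoint added with a CLI session along the following lines (an illustrative sketch; the exact command syntax may differ):
-
-```bash
-admin(config)# snmp-notification-receiver enabled true
-admin(config)# snmp-notification-receiver listen udp 0.0.0.0 10162
-admin(config)# commit
-```
-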
-The managed devices must also be configured to send notifications to the NSO addresses.
-
-NSO silently ignores any notification received from unknown devices. By default, NSO uses the `/devices/device/address` leaf, but this can be overridden by setting `/devices/device/snmp-notification-address`.
-
-```
-+--rw device [name]
- | +--rw name string
- | +--rw address inet:host
- | +--rw snmp-notification-address? inet:host
-```
-
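-For instance, to receive notifications from a device on a different address than the one used for management (hypothetical device name and address):
-
-```bash
-admin(config)# devices device ce0 snmp-notification-address 10.2.3.4
-admin(config)# commit
-```
-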
-## Built-in Filters
-
-There are some standard built-in filters for the SNMP notification receiver that perform standard tasks:
-
-* Standard filter for suppression of received SNMP events that are not of type `TRAP`, `NOTIFICATION`, or `INFORM`.
-* Standard filter for suppression of notifications emanating from IP addresses outside the defined set of addresses. This filter determines the source IP address first from the `snmpTrapAddress` 1.3.6.1.6.3.18.1.3 varbind if this is set in the PDU, or otherwise from the emanating peer IP address. If the resulting IP address does not match either the `snmp-notification-address` or the `address` leaf of any device in the device model, this notification is discarded.
-* Standard filter that will acknowledge the INFORM notification automatically.
-
-## Notification Handlers
-
-NSO uses the Java package SNMP4J to parse the SNMP PDUs.
-
-Notification handlers are user-supplied Java classes that implement the `com.tailf.snmp.snmp4j.NotificationHandler` interface. The `processPdu` method is expected to react to the SNMP4J event, e.g., by mapping the PDU to an NSO alarm. The handlers are registered in the `NotificationReceiver`. The `NotificationReceiver` is the main class that, in addition to maintaining the handlers, also has the responsibility to read the NSO SNMP notification configuration and set up `SNMP4J` listeners accordingly.
-
-An example of a notification handler can be found at [examples.ncs/device-management/snmp-notification-receiver](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/snmp-notification-receiver). This example handler receives notifications and sets an alarm text if the notification is an `IF-MIB::linkDown` trap.
-
-```java
-public class ExampleHandler implements NotificationHandler {
-
- private static Logger LOGGER = LogManager.getLogger(ExampleHandler.class);
-
- /**
- * This callback method is called when a notification is received from
- * Snmp4j.
- *
- * @param context
- * the EventContext, which e.g. provides the device name
- * @param event
- * a CommandResponderEvent, see Snmp4j javadoc for details
- * @param opaque
- * any object passed in register()
- */
- public HandlerResponse
- processPdu(EventContext context,
- CommandResponderEvent event,
- Object opaque)
- throws Exception {
-
- String alarmText = "test alarm";
-
- PDU pdu = event.getPDU();
- for (int i = 0; i < pdu.size(); i++) {
- VariableBinding vb = pdu.get(i);
- LOGGER.info(vb.toString());
-
- if (vb.getOid().toString().equals("1.3.6.1.6.3.1.1.4.1.0")) {
- String linkStatus = vb.getVariable().toString();
- if ("1.3.6.1.6.3.1.1.5.3".equals(linkStatus)) {
- alarmText = "IF-MIB::linkDown";
- }
- }
- }
-
- String device = context.getDeviceName();
- String managedObject = "/devices/device{"+device+"}";
- ConfIdentityRef alarmType =
- new ConfIdentityRef(new NcsAlarms().hash(),
- NcsAlarms._connection_failure);
- PerceivedSeverity severity = PerceivedSeverity.MAJOR;
- ConfDatetime timeStamp = ConfDatetime.getConfDatetime();
-
- Alarm al = new Alarm(new ManagedDevice(device),
- new ManagedObject(managedObject),
- alarmType,
- severity,
- false,
- alarmText,
- null,
- null,
- null,
- timeStamp);
-
- AlarmSink sink = new AlarmSink();
- sink.submitAlarm(al);
-
- return HandlerResponse.CONTINUE;
- }
-}
-```
-
-The instantiation and start of the `NotificationReceiver`, as well as the registration of notification handlers, are all expected to be done in the same application component of some NSO package. The following is an example of such an application component:
-
-```java
-/**
- * This class starts the Snmp-notification-receiver.
- */
-public class App implements ApplicationComponent {
-
- private ExampleHandler handl = null;
- private NotificationReceiver notifRec = null;
-
- public void run() {
- try {
- notifRec.start();
- synchronized (notifRec) {
- notifRec.wait();
- }
- } catch (Exception e) {
- NcsMain.reportPackageException(this, e);
- }
- }
-
- public void finish() throws Exception {
- if (notifRec == null) {
- return;
- }
- synchronized (notifRec) {
- notifRec.notifyAll();
- }
- notifRec.stop();
- NotificationReceiver.destroyNotificationReceiver();
- }
-
- public void init() throws Exception {
- handl = new ExampleHandler();
- notifRec =
- NotificationReceiver.getNotificationReceiver();
- // register example filter
- notifRec.register(handl, null);
- }
-}
-```
diff --git a/development/connected-topics/web-server.md b/development/connected-topics/web-server.md
deleted file mode 100644
index ddb05035..00000000
--- a/development/connected-topics/web-server.md
+++ /dev/null
@@ -1,258 +0,0 @@
----
-description: Use NSO's embedded web server to deliver dynamic content.
----
-
-# Web Server
-
-This page describes an embedded basic web server that can deliver static and Common Gateway Interface (CGI) dynamic content to a web client, commonly a browser. Where this web server or its configuration capabilities are too limited, a proxy server such as Nginx is recommended to address special requirements.
-
-## Web Server Capabilities
-
-The web server can be configured through settings in `ncs.conf`. See the Configuration Parameters section in [Manual Pages](../../resources/man/ncs.conf.5.md#configuration-parameters).
-
-Here is a brief overview of what you can configure on the web server:
-
-* `toggle web server`: the web server can be turned on or off.
-* `toggle transport`: enable HTTP and/or HTTPS, set IPs, ports, redirects, certificates, etc.
-* `hostname`: set the hostname of the web server and decide whether to block requests for other hostnames.
-* `/`: set the `docroot` from where all static content is served.
-* `/login`: set the `docroot` from where static content is served for URL paths starting with `/login`.
-* "/custom": set the `docroot` from where static content is served for URL paths starting with `/custom`.
-* `/cgi`: toggle CGI support and set the `docroot` from where dynamic content is served for URL paths starting with `/cgi`.
-* `non-authenticated paths`: by default, all URL paths, except those needed for the login page, are hidden from non-authenticated users; authentication is done by calling the JSON-RPC `login` method.
-* `allow symlinks`: Allow symlinks from under the `docroot`.
-* `cache`: set the cache time window for static content.
-* `log`: several logs are available to configure in terms of file paths—an access log, a full HTTP traffic/trace log, and a browser/JavaScript log.
-* `custom headers`: set custom headers across all static and dynamic content, including requests to `/jsonrpc`.
-
-In addition to what is configurable, the web server also GZip-compresses responses automatically if the browser handles such responses, either by compressing the response on the fly or, if a static file like `/bigfile.txt` is requested, by responding with the contents of `/bigfile.txt.gz`, if such a file exists.
-
-## CGI Support
-
-The web server includes CGI functionality, disabled by default. Once you enable it in `ncs.conf` (see Configuration Parameters in [Manual Pages](../../resources/man/ncs.conf.5.md#configuration-parameters)), you can write CGI scripts that will be called with the following NSO environment variables, prefixed with `NCS_`, when a user has logged in via JSON-RPC:
-
-* `JSONRPC_SESSIONID`: the JSON-RPC session id (cookie).
-* `JSONRPC_START_TIME`: the start time of the JSON-RPC session.
-* `JSONRPC_END_TIME`: the end time of the JSON-RPC session.
-* `JSONRPC_READ`: the latest JSON-RPC read transaction.
-* `JSONRPC_READS`: a comma-separated list of JSON-RPC read transactions.
-* `JSONRPC_WRITE`: the latest JSON-RPC write transaction.
-* `JSONRPC_WRITES`: a comma-separated list of JSON-RPC write transactions.
-* `MAAPI_USER`: the MAAPI username.
-* `MAAPI_GROUPS`: a comma-separated list of MAAPI groups.
-* `MAAPI_UID`: the MAAPI UID.
-* `MAAPI_GID`: the MAAPI GID.
-* `MAAPI_SRC_IP`: the MAAPI source IP address.
-* `MAAPI_SRC_PORT`: the MAAPI source port.
-* `MAAPI_USID`: the MAAPI USID.
-* `MAAPI_READ`: the latest MAAPI read transaction.
-* `MAAPI_READS`: a comma-separated list of MAAPI read transactions.
-* `MAAPI_WRITE`: the latest MAAPI write transaction.
-* `MAAPI_WRITES`: a comma-separated list of MAAPI write transactions.
-
-Server or HTTP-specific information is also exported as environment variables:
-
-* `SERVER_SOFTWARE`
-* `SERVER_NAME`
-* `GATEWAY_INTERFACE`
-* `SERVER_PROTOCOL`
-* `SERVER_PORT`
-* `REQUEST_METHOD`
-* `REQUEST_URI`
-* `DOCUMENT_ROOT`
-* `DOCUMENT_ROOT_MOUNT`
-* `SCRIPT_FILENAME`
-* `SCRIPT_TRANSLATED`
-* `PATH_INTO`
-* `PATH_TRANSLATED`
-* `SCRIPT_NAME`
-* `REMOTE_ADDR`
-* `REMOTE_HOST`
-* `SERVER_ADDR`
-* `LOCAL_ADDR`
-* `QUERY_STRING`
-* `CONTENT_TYPE`
-* `CONTENT_LENGTH`
-* `HTTP_*`: HTTP headers, e.g., the `Accept` value is exported as `HTTP_ACCEPT`.
-
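-As a minimal sketch, a CGI shell script could simply echo some of these variables back to the client (assuming CGI is enabled and the script is placed in the configured CGI docroot):
-
-```bash
-#!/bin/sh
-# Reply with plain text built from the environment variables
-# exported by NSO for the logged-in user.
-echo "Content-Type: text/plain"
-echo ""
-echo "User:    $NCS_MAAPI_USER"
-echo "Groups:  $NCS_MAAPI_GROUPS"
-echo "Request: $REQUEST_METHOD $REQUEST_URI"
-```
-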
-## Storing TLS Data in the Database
-
-The `tailf-tls.yang` YANG module defines a structure to store TLS data in the database. It is possible to store the private key, the private key's passphrase, the public key certificate, and CA certificates.
-
-To enable the web server to fetch TLS data from the database, `ncs.conf` needs to be configured.
-
-{% code title="Configuring NSO to Read TLS Data from the Database." %}
-```xml
-<webui>
-  <transport>
-    <ssl>
-      <enabled>true</enabled>
-      <ip>0.0.0.0</ip>
-      <port>8889</port>
-      <read-from-db>true</read-from-db>
-    </ssl>
-  </transport>
-</webui>
-```
-{% endcode %}
-
-Note that the options `key-file`, `cert-file`, and `ca-cert-file`, are ignored when `read-from-db` is set to true. See the [ncs.conf.5](../../resources/man/ncs.conf.5.md) man page for more details.
-
-The database is populated with TLS data by configuring `/tailf-tls:tls/private-key`, `/tailf-tls:tls/certificate`, and, optionally, `/tailf-tls:tls/ca-certificates`. It is possible to use password-protected private keys; in that case, the `passphrase` leaf in the `private-key` container needs to be set to the password of the encrypted private key. Unencrypted private key data can be supplied in both PKCS#8 and PKCS#1 format, while encrypted private key data needs to be supplied in PKCS#1 format.
-
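-If a private key is in the wrong format, OpenSSL can typically convert it. For example, an unencrypted PKCS#8 key can be rewritten in PKCS#1 format (a sketch; file names are illustrative):
-
-```bash
-$ openssl rsa -in key-pkcs8.pem -out key-pkcs1.pem
-```
-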
-In the following example, a password-protected private key, the passphrase, a public key certificate, and two CA certificates are configured with the CLI.
-
-{% code title="Populating the Database with TLS data" %}
-```bash
-
-admin@io> configure
-Entering configuration mode private
-[ok][2019-06-10 19:54:21]
-
-[edit]
-admin@io% set tls certificate cert-data
-():
-[Multiline mode, exit with ctrl-D.]
-> -----BEGIN CERTIFICATE-----
-> MIICrzCCAZcCFBh0ETLcNAFCCEcjSrrd5U4/a6vuMA0GCSqGSIb3DQEBCwUAMBQx
-> ...
-> -----END CERTIFICATE-----
->
-[ok][2019-06-10 19:59:36]
-
-[edit]
-admin@confd% set tls private-key key-data
-():
-[Multiline mode, exit with ctrl-D.]
-> -----BEGIN RSA PRIVATE KEY-----
-> Proc-Type: 4,ENCRYPTED
-> DEK-Info: AES-128-CBC,6E816829A93AAD3E0C283A6C8550B255
-> ...
-> -----END RSA PRIVATE KEY-----
-[ok][2019-06-10 20:00:27]
-
-[edit]
-admin@confd% set tls private-key passphrase
-(): ********
-[ok][2019-06-10 20:00:39]
-
-[edit]
-admin@confd% set tls ca-certificates ca-cert-1 cert-data
-():
-[Multiline mode, exit with ctrl-D.]
-> -----BEGIN CERTIFICATE-----
-> MIIDCTCCAfGgAwIBAgIUbzrNvBdM7p2rxwDBaqF5xN1gfmEwDQYJKoZIhvcNAQEL
-> ...
-> -----END CERTIFICATE-----
-[ok][2019-06-10 20:02:22]
-
-[edit]
-admin@confd% set tls ca-certificates ca-cert-2 cert-data
-():
-[Multiline mode, exit with ctrl-D.]
-> -----BEGIN CERTIFICATE-----
-> MIIDCTCCAfGgAwIBAgIUZ2GcDzHg44c2g7Q0Xlu3H8/4wnwwDQYJKoZIhvcNAQEL
-> ...
-> -----END CERTIFICATE-----
-[ok][2019-06-10 20:03:07]
-
-[edit]
-admin@confd% commit
-Commit complete.
-[ok][2019-06-10 20:03:11]
-
-[edit]
-```
-{% endcode %}
-
-The SHA256 fingerprints of the public key certificate and the CA certificates can be accessed as operational data. The fingerprint is shown as a hex string. The first octet identifies which hashing algorithm is used, `04` being SHA256, and the following octets are the actual fingerprint.
-
-{% code title="Show TLS Certificate Fingerprints" %}
-```bash
-
-admin@io> show tls
-tls certificate fingerprint 04:65:8a:9e:36:2c:a7:42:8d:93:50:af:97:08:ff:e6:1b:c5:43:a8:2c:b5:bf:79:eb:be:b4:70:88:96:40:22:fd
-NAME FINGERPRINT
---------------------------------------------------------------------------------------------------------------
-cacert-1 04:00:5e:22:f8:4b:b7:3a:47:e7:23:11:80:03:d3:9a:74:8d:09:c0:fa:cc:15:2b:7f:81:1a:e6:80:aa:a1:6d:1b
-cacert-2 04:2d:93:9b:37:21:d2:22:74:ad:d9:99:ae:76:b6:6a:f2:3b:e3:4e:07:32:f2:8b:f0:63:ad:21:7d:5e:db:92:0a
-
-[ok][2019-06-10 20:43:31]
-```
-{% endcode %}
-
-When the database is populated, NSO needs to be reloaded.
-
-```bash
-
-$ ncs --reload
-```
-
-After configuring NSO, populating the database, and reloading, the TLS transport is usable.
-
-```bash
-
-$ curl -kisu admin:admin https://localhost:8889
-HTTP/1.1 302 Found
-...
-```
-
-## Package Upload
-
-The web server includes support for uploading packages to `/package-upload` using `HTTP POST` from the local host to the NSO host, making them installable there. It is disabled by default but can be enabled in `ncs.conf`; see Configuration Parameters in [Manual Pages](../../resources/man/ncs.conf.5.md#configuration-parameters).
-
-By default, only one file per request will be processed; any remaining file parts after that will result in an error, and their content will be ignored. To allow multiple files in a request, you can increase `/ncs-config/webui/package-upload/max-files`.
-
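-The corresponding `ncs.conf` settings could look something like the following sketch, based on the parameter paths above; consult the [ncs.conf.5](../../resources/man/ncs.conf.5.md) man page for the authoritative format:
-
-```xml
-<webui>
-  <package-upload>
-    <enabled>true</enabled>
-    <max-files>2</max-files>
-  </package-upload>
-</webui>
-```
-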
-{% code title="Valid Package Example" %}
-```bash
-curl \
- --cookie 'sessionid=sess12541119146799620192;' \
- -X POST \
- -H "Cache-Control: no-cache" \
- -F "upload=@path/to/some-valid-package.tar.gz" \
- http://127.0.0.1:8080/package-upload
-[
- {
- "result": {
- "filename": "some-valid-package.tar.gz"
- }
- }
-]
-```
-{% endcode %}
-
-{% code title="Invalid Package Example" %}
-```bash
-curl \
- --cookie 'sessionid=sess12541119146799620192;' \
- -X POST \
- -H "Cache-Control: no-cache" \
- -F "upload=@path/to/some-invalid-package.tar.gz" \
- http://127.0.0.1:8080/package-upload
-[
- {
- "error": {
- "filename": "some-invalid-package.tar.gz",
- "data": {
- "reason": "Invalid package contents"
- }
- }
- }
-]
-```
-{% endcode %}
-
-The AAA infrastructure can be used to restrict access to library functions using command rules:
-
-```xml
-<cmdrule xmlns="http://tail-f.com/yang/acm">
-  <name>deny-package-upload</name>
-  <context>webui</context>
-  <command>::webui:: package-upload</command>
-  <access-operations>exec</access-operations>
-  <action>deny</action>
-</cmdrule>
-```
-
-Note how the command is prefixed with `::webui::`. This tells the AAA engine to apply the command rule to WebUI API functions. You can read more about command rules in [AAA infrastructure](../../administration/management/aaa-infrastructure.md).
diff --git a/development/core-concepts/README.md b/development/core-concepts/README.md
deleted file mode 100644
index faab7fc6..00000000
--- a/development/core-concepts/README.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-description: Key concepts in NSO development.
-icon: bandage
----
-
-# Core Concepts
-
diff --git a/development/core-concepts/api-overview/README.md b/development/core-concepts/api-overview/README.md
deleted file mode 100644
index a1ab2aa0..00000000
--- a/development/core-concepts/api-overview/README.md
+++ /dev/null
@@ -1,6 +0,0 @@
----
-description: Overview of NSO APIs.
----
-
-# API Overview
-
diff --git a/development/core-concepts/api-overview/java-api-overview.md b/development/core-concepts/api-overview/java-api-overview.md
deleted file mode 100644
index 0c4825ff..00000000
--- a/development/core-concepts/api-overview/java-api-overview.md
+++ /dev/null
@@ -1,1577 +0,0 @@
----
-description: Learn about the NSO Java API and its usage.
----
-
-# Java API Overview
-
-The NSO Java library contains a variety of APIs for different purposes. In this section, we introduce these and explain their usage. The Java library deliverables are found as two jar files (`ncs.jar` and `conf-api.jar`). The jar files and their dependencies can be found under `$NCS_DIR/java/jar/`.
-
-For convenience, the Java build tool Apache Ant ([https://ant.apache.org/](https://ant.apache.org/)) is used to run all of the examples. However, this tool is not a requirement for NSO.
-
-Common to all APIs is that they communicate with NSO over TCP sockets, which makes it possible to use them from a remote location.
-
-The following APIs are included in the library:
-
-
-* **MAAPI (Management Agent API)**: Northbound interface that is transactional and user session-based. Using this interface, both configuration and operational data can be read. Configuration data can be written and committed as one transaction. The API is complete in the sense that it is possible to write a new northbound agent using only this interface. It is also possible to attach to ongoing transactions in order to read uncommitted changes and/or modify data in these transactions.
-* **CDB API**: Southbound interface that provides access to the CDB configuration database. Using this interface, configuration data can be read. In addition, operational data that is stored in CDB can be read and written. This interface has a subscription mechanism to subscribe to changes. A subscription is specified on a path that points to an element in a YANG model or an instance in the instance tree. Any change under this point will trigger the subscription. CDB also has functions to iterate through the configuration changes when a subscription has been triggered.
-* **DP API**: Southbound interface that enables callbacks, hooks, and transforms. This API makes it possible to provide the service callbacks that handle service-to-device mapping logic. Other usual cases are external data providers for operational data or action callback implementations. There are also transaction and validation callbacks, etc. Hooks are callbacks that are fired when certain data is written, and the hook is expected to do additional modifications of data. Transforms are callbacks that are used when complete mediation between two different models is necessary.
-* **NED API (Network Element Driver)**: Southbound interface that mediates communication for devices that do not speak either NETCONF or SNMP. All prepackaged NEDs for different devices are written using this interface. It is possible to use the same interface to write your own NED. There are two types of NEDs, CLI NEDs and generic NEDs. CLI NEDs can be used for devices that can be controlled by a Cisco-style CLI syntax; in this case, the NED is developed primarily by building a YANG model, with a relatively small part in Java. In other cases, the generic NED can be used for any type of communication protocol.
-* **NAVU API (Navigation Utilities)**: API that resides on top of the MAAPI and CDB APIs. It provides schema model navigation and instance data handling (read/write). It uses either a Maapi or Cdb context for data access and incorporates a subset of functionality from these (navigational and data read/write calls). Its major use is in service implementations, which normally are about navigating device models and setting device data.
-* **ALARM API**: Eastbound API that is used both to consume and produce alarms in alignment with the NSO alarm model. To consume alarms, the AlarmSource interface is used. To produce a new alarm, the AlarmSink interface is used. There is also a possibility to buffer produced alarms and make asynchronous writes to CDB to improve alarm performance.
-* **NOTIF API**: Northbound API that is used to subscribe to system events from NSO. These events are generated for audit log events, for different transaction states, for HA state changes, upgrade events, user sessions, etc.
-* **HA API (High Availability)**: Northbound API used to manage a high-availability cluster of NSO instances. An NSO instance can be in one of three states: `NONE`, `PRIMARY`, or `SECONDARY`. With the HA API, the state can be queried and changed for NSO instances in the cluster.
-
-In addition, the Conf API framework contains utility classes for data types, keypaths, etc.
-
-## MAAPI
-
-The Management Agent API (MAAPI) provides an interface to the Transaction engine in NSO. As such it is very versatile. Here are some examples of how the MAAPI interface can be used.
-
-* Read and write configuration data stored by NSO or in an external database.
-* Write your own northbound interface.
-* Access data inside a not-yet-committed transaction, e.g. as validation logic, where your Java code can attach itself to a running transaction, read through the not-yet-committed transaction, and validate the proposed configuration change.
-* During database upgrade, access and write data to a special upgrade transaction.
-
-The first step of a typical sequence of MAAPI API calls when writing a management application would be to create a user session. Creating a user session is the equivalent of establishing an SSH connection from a NETCONF manager. It is up to the MAAPI application to authenticate users. The TCP connection between MAAPI and NSO is neither encrypted nor authenticated. The Maapi Java package does, however, include an `authenticate()` method that can be used by the application to hook into the AAA framework of NSO and let NSO authenticate the user.
-
-{% code title="Example: Establish a MAAPI Connection" %}
-```
- Socket socket = new Socket("localhost",Conf.NCS_PORT);
- Maapi maapi = new Maapi(socket);
-```
-{% endcode %}
-
-When a Maapi socket has been created the next step is to create a user session and supply the relevant information about the user for authentication.
-
-{% code title="Example: Starting a User Session" %}
-```
- maapi.startUserSession("admin", "maapi", new String[] {"admin"});
-```
-{% endcode %}
-
-When the user has been authenticated and a user session has been created the Maapi reference is now ready to establish a new transaction toward a data store. The following code snippet starts a read/write transaction towards the running data store.
-
-{% code title="Example: Start a Read/Write transaction Towards Running" %}
-```
- int th = maapi.startTrans(Conf.DB_RUNNING,
- Conf.MODE_READ_WRITE);
-```
-{% endcode %}
-
-The `startTrans(int db,int mode)` method of the Maapi class returns an integer that represents a transaction handler. This transaction handler is used when invoking the various Maapi methods.
-
-An example of a typical transactional method is the `getElem()` method:
-
-{% code title="Example: Maapi.getElem()" %}
-```java
- public ConfValue getElem(int tid,
- String fmt,
- Object... arguments)
-```
-{% endcode %}
-
-The first parameter of `getElem(int th, String fmt, Object ... arguments)` is the transaction handle, i.e., the integer that was returned by the `startTrans()` method. The _`fmt`_ parameter is a path that leads to a leaf in the data model. The path is expressed as a format string that contains fixed text with zero or more embedded format specifiers. For each specifier, one argument in the variable argument list is expected.
-
-The currently supported format specifiers in the Java API are:
-
-* `%d` - requiring an integer parameter (type int) to be substituted.
-* `%s` - requiring a `java.lang.String` parameter to be substituted.
-* `%x` - requiring subclasses of type `com.tailf.conf.ConfValue` to be substituted.
-
-```
- ConfValue val = maapi.getElem(th,
- "/hosts/host{%x}/interfaces{%x}/ip",
- new ConfBuf("host1"),
- new ConfBuf("eth0"));
-```
-
-The return value _`val`_ contains a reference to a `ConfValue`, which is the superclass of all the `ConfValue` subtypes that map to specific YANG data types. If the YANG data type of `ip` in the YANG model is `ietf-inet-types:ipv4-address`, we can narrow it to the corresponding subclass `com.tailf.conf.ConfIPv4`.
-
-```
- ConfIPv4 ipv4addr = (ConfIPv4)val;
-```
-
-The opposite operation of `getElem()` is the `setElem()` method, which sets a leaf to a specific value.
-
-```
- maapi.setElem(th ,
- new ConfUInt16(1500),
- "/hosts/host{%x}/interfaces{%x}/ip/mtu",
- new ConfBuf("host1"),
- new ConfBuf("eth0"));
-```
-
-We have not yet committed the transaction, so no modification is permanent. The data is only visible inside the current transaction. To commit the transaction, we call:
-
-```
- maapi.applyTrans(th)
-```
-
-The method `applyTrans()` commits the current transaction to the running datastore.
-
-{% code title="Example: Commit a Transaction" %}
-```
- int th = maapi.startTrans(Conf.DB_RUNNING, Conf.MODE_READ_WRITE);
- try {
- maapi.lock(Conf.DB_RUNNING);
- /// make modifications to th
- maapi.setElem(th, .....);
- maapi.applyTrans(th);
- maapi.finishTrans(th);
- } catch(Exception e) {
- maapi.finishTrans(th);
- } finally {
- maapi.unLock(Conf.DB_RUNNING);
- }
-```
-{% endcode %}
-
-It is also possible to run the code above without `lock(Conf.DB_RUNNING)`.
-
-Calling the `applyTrans()` method also performs additional validation of the new data as required by the data model and may fail if the validation fails. You can also perform the validation ahead of time, using the `validateTrans()` method.
-
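-For example, a transaction can be validated explicitly before it is applied (a minimal sketch; see the Javadoc for the exact semantics of the boolean flags):
-
-```java
- // Validate the transaction contents, then apply them
- maapi.validateTrans(th, false, true);
- maapi.applyTrans(th);
-```
-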
-Additionally, applying transaction can fail in case of a conflict with another, concurrent transaction. The best course of action in this case is to retry the transaction. Please see [Handling Conflicts](../nso-concurrency-model.md#ncs.development.concurrency.handling) for details.
-
-The MAAPI is also intended to attach to already existing NSO transactions to inspect not-yet-committed data, for example, to implement validation logic in Java. See the example below (Attach Maapi to the Current Transaction).
-
-## CDB API
-
-This API provides an interface to the CDB Configuration database which stores all configuration data. With this API the user can:
-
-* Start a CDB Session to read configuration data.
-* Subscribe to changes in CDB - The subscription functionality makes it possible to receive events/notifications when changes occur in CDB.
-
-CDB can also be used to store operational data, i.e., data which is designated with a `config false` statement in the YANG data model. Operational data is read/write through the CDB API. NETCONF and the other northbound agents can only read operational data.
-
-The Java CDB API is intended to be fast and lightweight, and CDB read sessions are expected to be short-lived and fast. The CDB API bypasses the NSO transaction manager, and therefore write operations on configuration data are prohibited. If operational data is stored in CDB, both read and write operations on this data are allowed.
-
-CDB is always locked for the duration of the session. It is therefore the responsibility of the programmer to make CDB interactions short in time and assure that all CDB sessions are closed when interaction has finished.
-
-To initialize the CDB API a CDB socket has to be created and passed into the API base class `com.tailf.cdb.Cdb`:
-
-{% code title="Example: Establish a Connection to CDB" %}
-```
- Socket socket = new Socket("localhost", Conf.NCS_PORT);
- Cdb cdb = new Cdb("MyCdbSock",socket);
-```
-{% endcode %}
-
-After the `cdb` socket has been established, a user could either start a CDB Session or start a subscription of changes in CDB:
-
-{% code title="Example: Establish a CDB Session" %}
-```
- CdbSession session = cdb.startSession(CdbDBType.RUNNING);
-
- /*
- * Retrieve the number of children in the list and
- * loop over these children
- */
- for(int i = 0; i < session.getNumberOfInstances("/servers/server"); i++) {
- ConfBuf name =
- (ConfBuf) session.getElem("/servers/server[%d]/hostname", i);
- ConfIPv4 ip =
- (ConfIPv4) session.getElem("/servers/server[%d]/ip", i);
- }
-```
-{% endcode %}
-
-We can refer to an element in a model with an expression like `/servers/server`. This type of string reference to an element is called a keypath, or just path. To refer to an element underneath a list, we need to identify which instance of the list is of interest.
-
-This can be done by pinpointing the sequence number in the ordered list, starting from 0. For instance, the path `/servers/server[2]/port` refers to the `port` leaf of the third server in the configuration. This numbering is only valid during the current CDB session. Note that the database is locked during this session.
-
-We can also refer to list instances using the key values for the list. Remember that we specify in the data model which leaf or leafs in the list constitute the key. In our case, a server has the `name` leaf as key. The syntax for keys is a space-separated list of key values enclosed within curly brackets: `{ Key1 Key2 ...}`. So, `/servers/server{www}/ip` refers to the `ip` leaf of the server whose name is `www`.
-
-A YANG list may have more than one key. For example, the keypath `/dhcp/subNets/subNet{192.168.128.0 255.255.255.0}/routers` refers to the `routers` list of the subnet which has the key `192.168.128.0 255.255.255.0`.
-
-The keypath syntax allows for formatting characters and accompanying substitution arguments. For example, `getElem("server[%d]/ifc{%s}/mtu", 2, "eth0")` uses a keypath with a mix of sequence number and key values, with formatting characters and arguments. Expressed in text, the path references the MTU of the third server instance's interface named `eth0`.
-
-The `CdbSession` Java class has a number of methods to control the current position in the model.
-
-* `CdbSession.cwd()` to get the current position.
-* `CdbSession.cd()` to change the current position.
-* `CdbSession.pushd()` to change and push a new position to a stack.
-* `CdbSession.popd()` to change back to a stacked position.
-
-Using relative paths and e.g. `CdbSession.pushd()`, it is possible to write code that can be reused for common subtrees, as sketched below.
-
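-A sketch of this pattern, assuming the `servers` model used earlier:
-
-```java
- // Save the current position and descend into one list entry
- session.pushd("/servers/server{www}");
- // Read leafs using paths relative to the new position
- ConfValue ip = session.getElem("ip");
- ConfValue port = session.getElem("port");
- // Restore the previous position
- session.popd();
-```
-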
-The current position also includes the namespace. If an element of another namespace should be read, then the prefix of that namespace should be set in the first tag of the keypath, like: `/smp:servers/server` where `smp` is the prefix of the namespace. It is also possible to set the default namespace for the CDB session with the method `CdbSession.setNamespace(ConfNamespace)`.
-
-{% code title="Example: Establish a CDB Subscription" %}
-```
- CdbSubscription sub = cdb.newSubscription();
- int subid = sub.subscribe(1, new servers(), "/servers/server/");
-
- // tell CDB we are ready for notifications
- sub.subscribeDone();
-
- // now do the blocking read
- while (true) {
- int[] points = sub.read();
- // now do something here like diffIterate
- .....
- }
-```
-{% endcode %}
-
-The CDB subscription mechanism allows an external Java program to be notified when different parts of the configuration changes. For such a notification, it is also possible to iterate through the change set in CDB for that notification.
-
-Subscriptions are primarily to the running data store. Subscriptions towards the operational data store in CDB are possible, but the mechanism is slightly different; see below.
-
-The first thing to do is to register in CDB which paths should be subscribed to. This is accomplished with the `CdbSubscription.subscribe(...)` method. Each registered path returns a subscription point identifier. Each subscriber can have multiple subscription points, and there can be many different subscribers.
-
-Every point is defined through a path, similar to the paths we use for read operations, with the difference that instead of fully instantiated paths to list instances, we can choose to use tag paths, i.e., leave out the key value parts to be able to subscribe to all instances. We can subscribe either to specific leaves or to entire subtrees. Assume a YANG data model in the form of:
-
-```yang
- container servers {
-   list server {
-     key name;
-     leaf name { type string; }
-     leaf ip { type inet:ip-address; }
-     leaf port { type inet:port-number; }
-     .....
-```
-
-Explaining this by example we get:
-
-```
-/servers/server/port
-```
-
-A subscription on a leaf. Only changes to this leaf will generate a notification.
-
-```
- /servers
-```
-
-Means that we subscribe to any changes in the subtree rooted at `/servers`. This includes additions or removals of server instances, as well as changes to already existing server instances.
-
-```
- /servers/server{www}/ip
-```
-
-Means that we only want to be notified when the server "www" changes its ip address.
-
-```
- /servers/server/ip
-```
-
-Means we want to be notified when the leaf ip is changed in any server instance.
-
-When adding a subscription point the client must also provide a priority, which is an integer. As CDB is changed, the change is part of a transaction. For example, the transaction is initiated by a commit operation from the CLI or an edit-config operation in NETCONF resulting in the running database being modified. As the last part of the transaction, CDB will generate notifications in lock-step priority order. First, all subscribers at the lowest numbered priority are handled; once they all have replied and synchronized by calling `sync(CdbSubscriptionSyncType synctype)`, the next set - at the next priority level - is handled by CDB. Not until all subscription points have been acknowledged, is the transaction complete.
-
-This implies that if the initiator of the transaction was, for example, a commit command in the CLI, the command will hang until notifications have been acknowledged.
-
-Note that even though the notifications are delivered within the transaction, a subscriber can't reject the changes (since this would break the two-phase commit protocol used by the NSO backplane towards all data providers).
-
-When a client is done subscribing, it needs to inform NSO it is ready to receive notifications. This is done by first calling `subscribeDone()`, after which the subscription socket is ready to be polled.
-
-Once a subscriber has read its subscription notifications using `read()`, it can iterate through the changes that caused the particular subscription notification using the `diffIterate()` method.
-
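-A sketch of a subscriber loop that iterates the changes and then acknowledges the notification (method signatures are simplified; see the Javadoc for the available overloads):
-
-```java
- while (true) {
-     int[] points = sub.read();
-     // Walk the change set behind this notification
-     sub.diffIterate(points[0], new CdbDiffIterate() {
-         public DiffIterateResultFlag iterate(ConfObject[] kp,
-                                              DiffIterateOperFlag op,
-                                              ConfObject oldValue,
-                                              ConfObject newValue,
-                                              Object initstate) {
-             // E.g. log the operation and the keypath it applies to
-             System.out.println(op + " " + new ConfPath(kp));
-             return DiffIterateResultFlag.ITER_RECURSE;
-         }
-     });
-     // Tell CDB we are done so the transaction can proceed
-     sub.sync(CdbSubscriptionSyncType.DONE_PRIORITY);
- }
-```
-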
-It is also possible to start a new read-session to the `CDB_PRE_COMMIT_RUNNING` database to read the running database as it was before the pending transaction.
-
-Subscriptions towards the operational data in CDB are similar to the above, but because the operational data store is designed for light-weight access (and thus, does not have transactions and normally avoids the use of any locks), there are several differences, in particular:
-
-* Subscription notifications are only generated if the writer obtains the subscription lock, by using `startSession()` with `CdbLockType.LOCK_REQUEST`. In addition, when starting a session towards the operational data, we need to pass `CdbDBType.CDB_OPERATIONAL` when starting a CDB session:\\
-
- ```
- CdbSession sess =
- cdb.startSession(CdbDBType.CDB_OPERATIONAL,
- EnumSet.of(CdbLockType.LOCK_REQUEST));
- ```
-* No priorities are used.
-* Neither the writer that generated the subscription notifications nor other writers to the same data are blocked while notifications are being delivered. However, the subscription lock remains in effect until notification delivery is complete.
-* The previous value for a modified leaf is not available when using the `diffIterate()` method.
-
-Essentially, a write operation towards the operational data store, combined with the subscription lock, takes on the role of a transaction for configuration data as far as subscription notifications are concerned. This means that if operational data updates are done with many single-element write operations, this can potentially result in a lot of subscription notifications. Thus, it is a good idea to use the multi-element `setObject()`, which takes an array of `ConfValue`s and sets a complete container, or `setValues()`, which takes an array of `ConfXMLParam` and is capable of setting an arbitrary part of the model. This keeps down the number of notifications to subscribers when updating operational data.
-
-Write operations that do not attempt to obtain the subscription lock are allowed to proceed even during notification delivery. Therefore, it is the responsibility of the programmer to obtain the lock as needed when writing to the operational data store. For example, if subscribers should be able to reliably read the exact data that resulted from the write that triggered their subscription, the subscription lock must always be obtained when writing that particular set of data elements. One possibility is of course to obtain the lock for all writes to operational data, but this may have an unacceptable performance impact.
-
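-A sketch of a writer that takes the subscription lock while updating operational data (the keypath is hypothetical):
-
-```java
- // Take the subscription lock for the duration of the session so
- // subscribers see a consistent view of the updated data
- CdbSession oper =
-     cdb.startSession(CdbDBType.CDB_OPERATIONAL,
-                      EnumSet.of(CdbLockType.LOCK_REQUEST));
- oper.setElem(new ConfUInt32(42), "/stats/counter"); // hypothetical path
- oper.endSession(); // ends the session and releases the lock
-```
-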
-To view registered subscribers, use the `ncs --status` command. For details on how to use the different subscription functions, see the Javadoc for NSO Java API.
-
-The code in the [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/cdb-java) example illustrates three different types of CDB subscribers:
-
-* A simple CDB config subscriber that utilizes the low-level CDB API directly to subscribe to changes in the subtree of the configuration.
-* Two Navu CDB subscribers, one subscribing to configuration changes, and one subscribing to changes in operational data.
-
-## DP API
-
-The DP API makes it possible to create callbacks which are called when certain events occur in NSO. As the name of the API indicates, it is possible to write data provider callbacks that provide data to NSO that is stored externally. However, this is only one of several callback types provided by this API. There exist callback interfaces for the following types:
-
-* Service Callbacks - invoked for service callpoints in the YANG model. Implements service to device information mappings. See, for example, [examples.ncs/service-management/rfs-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/rfs-service).
-* Action Callbacks - invoked for a certain action in the YANG model which is defined with a callpoint directive.
-* Authentication Callbacks - invoked for external authentication functions.
-* Authorization Callbacks - invoked for external authorization of operations and data. Note, avoid this callback if possible since performance will otherwise be affected.
-* Data Callbacks - invoked for data provision and manipulation for certain data elements in the YANG model which is defined with a callpoint directive.
-* DB Callbacks - invoked for external database stores.
-* Range Action Callbacks - A variant of action callback where ranges are defined for the key values.
-* Range Data Callbacks - A variant of data callback where ranges are defined for the data values.
-* Snmp Inform Response Callbacks - invoked for response on Snmp inform requests on a certain element in the Yang model which is defined by a callpoint directive.
-* Transaction Callbacks - invoked for external participants in the two-phase commit protocol.
-* Transaction Validation Callbacks - invoked for external transaction validation in the validation phase of a two-phase commit.
-* Validation Callbacks - invoked for validation of certain elements in the YANG Model which is designed with a callpoint directive.
-
-The callbacks are methods in ordinary java POJOs. These methods are adorned with a specific Java Annotations syntax for that callback type. The annotation makes it possible to add metadata information to NSO about the supplied method. The annotation includes information about which `callType` and, when necessary, which `callpoint` the method should be invoked for.
-
-{% hint style="info" %}
-Only one Java object can be registered on one and the same `callpoint`. Therefore, when a new Java object registers on a `callpoint` that has already been registered, the earlier registration (and Java object) will be silently removed.
-{% endhint %}
-
-### Transaction and Data Callbacks
-
-By default, NSO stores all configuration data in its CDB data store. We may wish to store and configure other data in NSO than what is defined by the NSO built-in YANG models, alternatively, we may wish to store parts of the NSO tree outside NSO (CDB) i.e. in an external database. Say, for example, that we have our customer database stored in a relational database disjunct from NSO. To implement this, we must do a number of things: We must define a callpoint somewhere in the configuration tree, and we must implement what is referred to as a data provider. Also, NSO executes all configuration changes inside transactions and if we want NSO (CDB) and our external database to participate in the same two-phase commit transactions, we must also implement a transaction callback. Altogether, it will appear as if the external data is part of the overall NSO configuration, thus the service model data can refer directly to this external data - typically to validate service instances.
-
-The basic idea for a data provider is that it participates entirely in each NSO transaction, and it is also responsible for reading and writing all data in the configuration tree below the callpoint. Before explaining how to write a data provider and what the responsibilities of a data provider are, we must explain how the NSO transaction manager drives all participants in a lock-step manner through the phases of a transaction.
-
-A transaction has a number of phases, the external data provider gets called in all the different phases. This is done by implementing a transaction callback class and then registering that class. We have the following distinct phases of an NSO transaction:
-
-* `init()`: In this phase, the transaction callback class `init()` methods get invoked. We use annotation on the method to indicate that it's the `init()` method as in:\\
-
- ```java
- public class MyTransCb {
-
- @TransCallback(callType=TransCBType.INIT)
- public void init(DpTrans trans) throws DpCallbackException {
- return;
- }
- ```
-
- \
- Each different callback method we wish to register must be annotated with an annotation from `TransCBType`.
-
- \
- The callback is invoked when a transaction starts, but NSO delays the actual invocation as an optimization. For a data provider providing configuration data, `init()` is invoked just before the first data-reading callback, or just before the `transLock()` callback (see below), whichever comes first. When a transaction has started, it is in a state we refer to as `READ`. NSO will, while the transaction is in the `READ` state, execute a series of read operations towards (possibly) different callpoints in the data provider.
-
- \
- Any write operations performed by the management station are accumulated by NSO and the data provider doesn't see them while in the `READ` state.
-* `transLock()`: This callback gets invoked by NSO at the end of the transaction. NSO has accumulated a number of write operations and will now initiate the final write phases. Once the `transLock()` callback has returned, the transaction is in the `VALIDATE` state. In the `VALIDATE` state, NSO will (possibly) execute a number of read operations to validate the new configuration. Following the read operations for validation comes the invocation of one of the `writeStart()` or `transUnlock()` callbacks.
-* `transUnlock()`: This callback gets invoked by NSO if the validation fails or if the validation was done separately from the commit (e.g. by giving a `validate` command in the CLI). Depending on where the transaction originated, the behavior after a call to `transUnlock()` differs. If the transaction originated from the CLI, the CLI reports to the user that the configuration is invalid and the transaction remains in the `READ` state whereas if the transaction originated from a NETCONF client, the NETCONF operation fails and a NETCONF `rpc` error is reported to the NETCONF client/manager.
-* `writeStart()`: If the validation succeeded, the `writeStart()` callback will be called and the transaction will enter the `WRITE` state. While in `WRITE` state, a number of calls to the write data callbacks `setElem()`, `create()` and `remove()` will be performed.
-
- \
- If the underlying database supports real atomic transactions, this is a good place to start such a transaction.
-
- \
- The application should not modify the real running data here. If, later, the `abort()` callback is called, all write operations performed in this state must be undone.
-* `prepare()`: Once all write operations are executed, the `prepare()` callback is executed. This callback ensures that all participants have succeeded in writing all elements. The purpose of the callback is merely to indicate to NSO that the data provider is ok, and has not yet encountered any errors.
-* `abort()`: If any of the participants die or fail to reply in the `prepare()` callback, the remaining participants all get invoked in the `abort()` callback. All data written so far in this transaction should be disposed of.
-* `commit()`: If all participants successfully replied in their respective `prepare()` callbacks, all participants get invoked in their respective `commit()` callbacks. This is the place to make all data written by the write callbacks in `WRITE` state permanent.
-* `finish()`: And finally, the `finish()` callback gets invoked at the end. This is a good place to deallocate any local resources for the transaction. The `finish()` callback can be called from several different states.
-
-The following picture illustrates the conceptual state machine an NSO transaction goes through.
-
-_Figure: NSO Transaction State Machine_
-
-All callback methods are optional. If a callback method is not implemented, it is the same as having an empty callback which simply returns.
-
-Similar to how we have to register transaction callbacks, we must also register data callbacks. The transaction callbacks cover the life span of the transaction, and the data callbacks are used to read and write data inside a transaction. The data callbacks have access to what is referred to as the transaction context in the form of a `DpTrans` object.
-
-We have the following data callbacks:
-
-* `getElem()`: This callback is invoked by NSO when NSO needs to read the actual value of a leaf element. We must also implement the `getElem()` callback for the keys. NSO invokes `getElem()` on a key as an existence test.\\
-
- We define the `getElem` callback inside a class as:\\
-
- ```java
- public static class DataCb {
-
- @DataCallback(callPoint="foo", callType=DataCBType.GET_ELEM)
- public ConfValue getElem(DpTrans trans, ConfObject[] kp)
- throws DpCallbackException {
- .....
- ```
-* `existsOptional()`: This callback is called for all typeless and optional elements, i.e., `presence` containers and leafs of type `empty` (unless in a union). If we have presence containers or leafs of type `empty` (unless in a union), we cannot use the `getElem()` callback to read the value of such a node, since it does not have a type. Type `empty` leafs in a union are instead read using the `getElem()` callback.
-* An example of a data model could be:\\
-
- ```yang
- container bs {
- presence "";
- tailf:callpoint bcp;
- list b {
- key name;
- max-elements 64;
- leaf name {
- type string;
- }
- container opt {
- presence "";
- leaf ii {
- type int32;
- }
- }
- leaf foo {
- type empty;
- }
- }
- }
- ```
-
- The above YANG fragment has three nodes that may or may not exist and that do not have a type. If we do not have any such elements, nor any operational data lists without keys (see below), we do not need to implement the `existsOptional()` callback.
-
- \
- If we have the above data model, we must implement the `existsOptional()`, and our implementation must be prepared to reply to calls of the function for the paths `/bs`, `/bs/b/opt`, and `/bs/b/foo`. The leaf `/bs/b/opt/ii` is not mandatory, but it does have a type namely `int32`, and thus the existence of that leaf will be determined through a call to the `getElem()` callback.
-
- \
- The `existsOptional()` callback may also be invoked by NSO as an "existence test" for an entry in an operational data list without keys. Normally this existence test is done with a `getElem()` request for the first key, but since there are no keys, this callback is used instead. Thus, if we have such lists, we must also implement this callback, and handle a request where the keypath identifies a list entry.
-* `iterator()` and `getKey()`: This pair of callbacks is used when NSO wants to traverse a YANG list. The job of the `iterator()` callback is to return an `Iterator` object that is invoked by the library. For each `Object` returned by the `iterator`, the NSO library will invoke the `getKey()` callback on the returned object. The `getKey()` callback shall return a `ConfKey` value.
-
- \
- An alternative to the `getKey()` callback is to register the optional `getObject()` callback whose job it is to return not just the key, but the entire YANG list entry. It is possible to register both `getKey()` and `getObject()` or either. If the `getObject()` is registered, NSO will attempt to use it only when bulk retrieval is executed.
-
-We also have two additional optional callbacks that may be implemented for efficiency reasons.
-
-* `getObject()`: If this optional callback is implemented, the work of the callback is to return an entire `object`, i.e., a list instance. This is not the same `getObject()` as the one that is used in combination with the `iterator()` callback.
-* `numInstances()`: When NSO needs to figure out how many instances we have of a certain element, by default NSO will repeatedly invoke the `iterator()` callback. If this callback is installed, it will be called instead.
-
-The following example illustrates an external data provider. The example is possible to run from the examples collection. It resides under [examples.ncs/sdk-api/external-db](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/external-db).
-
-The example comes with a tailor-made database - `MyDb`. That source code is provided with the example but not shown here. However, the functionality will be obvious from the method names like `newItem()`, `lock()`, `save()`, etc.
-
-Two classes are implemented, one for the transaction callbacks and another for the data callbacks.
-
-The data model we wish to incorporate into NSO is a trivial list of work items. It looks like:
-
-{% code title="Example: work.yang" %}
-```yang
- module work {
- namespace "http://example.com/work";
- prefix w;
- import ietf-yang-types {
- prefix yang;
- }
- import tailf-common {
- prefix tailf;
- }
- description "This model is used as a simple example model
- illustrating how to have NCS configuration data
- that is stored outside of NCS - i.e not in CDB";
-
- revision 2010-04-26 {
- description "Initial revision.";
- }
-
- container work {
- tailf:callpoint workPoint;
- list item {
- key key;
- leaf key {
- type int32;
- }
- leaf title {
- type string;
- }
- leaf responsible {
- type string;
- }
- leaf comment {
- type string;
- }
- }
- }
-}
-```
-{% endcode %}
-
-Note the callpoint directive in the model, it indicates that an external Java callback must register itself using that name. That callback will be responsible for all data below the callpoint.
-
-To compile the `work.yang` data model and then also to generate Java code for the data model, we invoke `make all` in the example package src directory. The Makefile will compile the yang files in the package, generate Java code for those data models, and then also invoke ant in the Java src directory.
-
-The Data callback class looks as follows:
-
-{% code title="Example: DataCb Class" %}
-```java
- @DataCallback(callPoint=work.callpoint_workPoint,
- callType=DataCBType.ITERATOR)
- public Iterator