diff --git a/README.md b/README.md
index df1afbe6..ed83e91e 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,24 @@
description: Get started with the Cisco Crosswork NSO documentation guides.
icon: power-off
cover: images/gb-cover-final.png
-coverY: 0
+coverY: -32.31167466986795
+layout:
+ width: default
+ cover:
+ visible: true
+ size: hero
+ title:
+ visible: true
+ description:
+ visible: true
+ tableOfContents:
+ visible: true
+ outline:
+ visible: true
+ pagination:
+ visible: true
+ metadata:
+ visible: true
---
# Start
diff --git a/administration/advanced-topics/layered-service-architecture.md b/administration/advanced-topics/layered-service-architecture.md
index 580e663b..1d808d33 100644
--- a/administration/advanced-topics/layered-service-architecture.md
+++ b/administration/advanced-topics/layered-service-architecture.md
@@ -97,9 +97,9 @@ Finally, if the two-layer approach proves to be insufficient due to requirements
### Greenfield LSA Application
-This section describes a small LSA application, which exists as a running example in the [examples.ncs/layered-services-architecture/lsa-single-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.5/layered-services-architecture/lsa-single-version-deployment) directory.
+This section describes a small LSA application, which exists as a running example in the [examples.ncs/layered-services-architecture/lsa-single-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture/lsa-single-version-deployment) directory.
-The application is a slight variation on the [examples.ncs/service-management/rfs-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/rfs-service) example where the YANG code has been split up into an upper-layer and a lower-layer implementation. The example topology (based on netsim for the managed devices, and NSO for the upper/lower layer NSO instances) looks like the following:
+The application is a slight variation on the [examples.ncs/service-management/rfs-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/rfs-service) example where the YANG code has been split up into an upper-layer and a lower-layer implementation. The example topology (based on netsim for the managed devices, and NSO for the upper/lower layer NSO instances) looks like the following:
Example LSA architecture
@@ -425,7 +425,7 @@ To conclude this section, the final remark here is that to design a good LSA app
### Greenfield LSA Application Designed for Easy Scaling
-In this section, we'll describe a lightly modified version of the example in the previous section. The application we describe here exists as a running example under [examples.ncs/layered-services-architecture/lsa-scaling](https://github.com/NSO-developer/nso-examples/tree/6.5/layered-services-architecture/lsa-scaling).
+In this section, we'll describe a lightly modified version of the example in the previous section. The application we describe here exists as a running example under [examples.ncs/layered-services-architecture/lsa-scaling](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture/lsa-scaling).
Sometimes it is desirable to be able to easily move devices from one lower LSA node to another. This makes it possible to easily expand or shrink the number of lower LSA nodes. Additionally, it is sometimes desirable to avoid HA pairs for replication but instead use a common store for all lower LSA devices, such as a distributed database, or a common file system.
@@ -531,7 +531,7 @@ If we do not have the luxury of designing our NSO service application from scrat
Usually, the reasons for re-architecting an existing application are performance-related.
-In the NSO example collection, two popular examples are the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-java) and [examples.ncs/service-management/mpls-vpn-python](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-python) examples. Those example contains an almost "real" VPN provisioning example whereby VPNs are provisioned in a network of CPEs, PEs, and P routers according to this picture:
+In the NSO example collection, two popular examples are the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-java) and [examples.ncs/service-management/mpls-vpn-python](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-python) examples. These examples contain an almost "real" VPN provisioning example whereby VPNs are provisioned in a network of CPEs, PEs, and P routers according to this picture:
VPN network
@@ -592,7 +592,7 @@ By far the easiest way to change an existing monolithic NSO application into the
In this example, the topology information is stored in a separate container `share-data` and propagated to the LSA nodes by means of service code.
-The example [examples.ncs/layered-services-architecture/mpls-vpn-lsa](https://github.com/NSO-developer/nso-examples/tree/6.5/layered-services-architecture/mpls-vpn-lsa) example does exactly this, the upper layer data model in `upper-nso/packages/l3vpn/src/yang/l3vpn.yang` now looks as:
+The [examples.ncs/layered-services-architecture/mpls-vpn-lsa](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture/mpls-vpn-lsa) example does exactly this; the upper-layer data model in `upper-nso/packages/l3vpn/src/yang/l3vpn.yang` now looks as follows:
```yang
list l3vpn {
@@ -765,7 +765,7 @@ Deployment of an LSA cluster where all the nodes have the same major version of
The choice between the two deployment options depends on your functional needs. The single version is easier to maintain and is a good starting point but is less flexible. While it is possible to migrate from one to the other, the migration from a single version to a multi-version is typically easier than the other way around. Still, every migration requires some effort, so it is best to pick one approach and stick to it.
-You can find working examples of both deployment types in the [examples.ncs/layered-services-architecture/lsa-single-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.5/layered-services-architecture/lsa-single-version-deployment) and [examples.ncs/layered-services-architecture/lsa-multi-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.5/layered-services-architecture/lsa-multi-version-deployment) folders, respectively.
+You can find working examples of both deployment types in the [examples.ncs/layered-services-architecture/lsa-single-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture/lsa-single-version-deployment) and [examples.ncs/layered-services-architecture/lsa-multi-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture/lsa-multi-version-deployment) folders, respectively.
### RFS Nodes Setup
@@ -912,7 +912,7 @@ Once you have both, the CFS and device-compiled RFS service packages are ready;
### Example Walkthrough
-You can see all the required setup steps for a single version deployment performed in the example [examples.ncs/layered-services-architecture/lsa-single-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.5/layered-services-architecture/lsa-single-version-deployment) and the [examples.ncs/layered-services-architecture/lsa-multi-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.5/layered-services-architecture/lsa-multi-version-deployment) has the steps for the multi-version one. The two are quite similar but the multi-version deployment has additional steps, so it is the one described here.
+You can see all the required setup steps for a single-version deployment performed in the [examples.ncs/layered-services-architecture/lsa-single-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture/lsa-single-version-deployment) example, while the [examples.ncs/layered-services-architecture/lsa-multi-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture/lsa-multi-version-deployment) example has the steps for the multi-version one. The two are quite similar, but the multi-version deployment has additional steps, so it is the one described here.
First, build the example for manual setup.
@@ -1172,7 +1172,7 @@ Likewise, you can return to the Single-Version Deployment, by upgrading the RFS
All these `ned-id` changes stem from the fact that the upper-layer CFS node treats the lower-layer RFS node as a managed device, requiring the correct model, just like it does for any other device type. For the same reason, maintenance (bug fix or patch) NSO upgrades do not result in a changed `ned-id`, so for those, no migration is necessary.
-The [NSO example set](https://github.com/NSO-developer/nso-examples/tree/6.5/layered-services-architecture) illustrates different aspects of LSA deployment including working with single- and multi-version deployments.
+The [NSO example set](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture) illustrates different aspects of LSA deployment including working with single- and multi-version deployments.
### User Authorization Passthrough
diff --git a/administration/installation-and-deployment/containerized-nso.md b/administration/installation-and-deployment/containerized-nso.md
index f77ac43a..da4045f6 100644
--- a/administration/installation-and-deployment/containerized-nso.md
+++ b/administration/installation-and-deployment/containerized-nso.md
@@ -48,7 +48,7 @@ Consult the [Installation](./) documentation for information on installing NSO o
{% hint style="info" %}
See [Developing and Deploying a Nano Service](deployment/develop-and-deploy-a-nano-service.md) for an example that uses the container to deploy an SSH-key-provisioning nano service.
-The README in the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.5/getting-started/netsim-sshkey) example provides a link to the container-based deployment variant of the example. See the `setup_ncip.sh` script and `README` in the `netsim-sshkey` deployment example for details.
+The README in the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey) example provides a link to the container-based deployment variant of the example. See the `setup_ncip.sh` script and `README` in the `netsim-sshkey` deployment example for details.
{% endhint %}
### Build Image
@@ -195,7 +195,7 @@ If you need to perform operations before or after the `ncs` process is started i
NSO is installed with the `--run-as-user` option for build and production containers to run NSO from the non-root `nso` user that belongs to the `nso` user group.
-When migrating from container versions where NSO has `root` privilege, ensure the `nso` user owns or has access rights to the required files and directories. Examples include application directories, SSH host keys, SSH keys used to authenticate with devices, etc. See the deployment example variant referenced by the [examples.ncs/getting-started/netsim-sshkey/README.md](https://github.com/NSO-developer/nso-examples/tree/6.5/getting-started/netsim-sshkey) for an example.
+When migrating from container versions where NSO has `root` privilege, ensure the `nso` user owns or has access rights to the required files and directories. Examples include application directories, SSH host keys, SSH keys used to authenticate with devices, etc. See the deployment example variant referenced by the [examples.ncs/getting-started/netsim-sshkey/README.md](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey) for an example.
The NSO container runs a script called `take-ownership.sh` as part of its startup, which takes ownership of all the directories that NSO needs. The script will be one of the first things to run. The script can be overridden to take ownership of even more directories, such as mounted volumes or bind mounts.
@@ -625,7 +625,7 @@ This example covers the necessary information to manifest the use of NSO images
#### **Packages**
-The packages used in this example are taken from the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.5/getting-started/netsim-sshkey) example:
+The packages used in this example are taken from the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey) example:
* `distkey`: A simple Python + template service package that automates the setup of SSH public key authentication between netsim (ConfD) devices and NSO using a nano service.
* `ne`: A NETCONF NED package representing a netsim network element that implements a configuration subscriber Python application that adds or removes the configured public key, which the netsim (ConfD) network element checks when authenticating public key authentication clients.
diff --git a/administration/installation-and-deployment/deployment/deployment-example.md b/administration/installation-and-deployment/deployment/deployment-example.md
index 7d927aa5..a39caf89 100644
--- a/administration/installation-and-deployment/deployment/deployment-example.md
+++ b/administration/installation-and-deployment/deployment/deployment-example.md
@@ -4,7 +4,7 @@ description: Understand NSO deployment with an example setup.
# Deployment Example
-This section shows examples of a typical deployment for a highly available (HA) setup. A reference to an example implementation of the `tailf-hcc` layer-2 upgrade deployment scenario described here, check the NSO example set under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc). The example covers the following topics:
+This section shows examples of a typical deployment for a highly available (HA) setup. For a reference to an example implementation of the `tailf-hcc` layer-2 upgrade deployment scenario described here, check the NSO example set under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc). The example covers the following topics:
* Installation of NSO on all nodes in an HA setup
* Initial configuration of NSO on all nodes
@@ -175,9 +175,9 @@ The NSO HA, together with the `tailf-hcc` package, provides three features:
* If the leader/primary fails, a follower/secondary takes over and starts to act as leader/primary. This is how HA Raft works and how the rule-based HA variant of this example is configured to handle failover automatically.
* At failover, `tailf-hcc` sets up a virtual alias IP address on the leader/primary node only and uses gratuitous ARP packets to update all nodes in the network with the new mapping to the leader/primary node.
-Nodes in other networks can be updated using the `tailf-hcc` layer-3 BGP functionality or a load balancer. See the `load-balancer`and `hcc`examples in the NSO example set under [examples.ncs/high-availability](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability).
+Nodes in other networks can be updated using the `tailf-hcc` layer-3 BGP functionality or a load balancer. See the `load-balancer` and `hcc` examples in the NSO example set under [examples.ncs/high-availability](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability).
-See the NSO example set under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc) for a reference to an HA Raft and rule-based HA `tailf-hcc` Layer 3 BGP examples.
+See the NSO example set under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) for references to the HA Raft and rule-based HA `tailf-hcc` layer-3 BGP examples.
The HA Raft and rule-based HA upgrade-l2 examples also demonstrate HA failover, upgrading the NSO version on all nodes, and upgrading NSO packages on all nodes.
@@ -211,7 +211,7 @@ The NSO system installations performed on the nodes in the HA cluster also insta
### Syslog
-For the HA Raft and rule-based HA upgrade-l2 examples, see the reference from the `README` in the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc) example directory; the examples integrate with `rsyslog` to log the `ncs`, `developer`, `upgrade`, `audit`, `netconf`, `snmp`, and `webui-access` logs to syslog with `facility` set to `daemon` in `ncs.conf`.
+For the HA Raft and rule-based HA upgrade-l2 examples, see the reference from the `README` in the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) example directory; the examples integrate with `rsyslog` to log the `ncs`, `developer`, `upgrade`, `audit`, `netconf`, `snmp`, and `webui-access` logs to syslog with `facility` set to `daemon` in `ncs.conf`.
`rsyslogd` on the nodes in the HA cluster is configured to write the daemon facility logs to `/var/log/daemon.log`, and forward the daemon facility logs with the severity `info` or higher to the manager node's `/var/log/ha-cluster.log` syslog.
@@ -345,4 +345,4 @@ $ cat /etc/ncs/ipc_access
.......
```
-For an HA setup, HA Raft is based on the Raft consensus algorithm and provides the best fault tolerance, performance, and security. It is therefore recommended over the legacy rule-based HA variant. The `raft-upgrade-l2` project, referenced from the NSO example set under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.4/high-availability/hcc), together with this Deployment Example section, describes a reference implementation. See [NSO HA Raft](../../management/high-availability.md#ug.ha.raft) for more HA Raft details.
+For an HA setup, HA Raft is based on the Raft consensus algorithm and provides the best fault tolerance, performance, and security. It is therefore recommended over the legacy rule-based HA variant. The `raft-upgrade-l2` project, referenced from the NSO example set under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc), together with this Deployment Example section, describes a reference implementation. See [NSO HA Raft](../../management/high-availability.md#ug.ha.raft) for more HA Raft details.
diff --git a/administration/installation-and-deployment/deployment/develop-and-deploy-a-nano-service.md b/administration/installation-and-deployment/deployment/develop-and-deploy-a-nano-service.md
index 27b9ae3d..8157518a 100644
--- a/administration/installation-and-deployment/deployment/develop-and-deploy-a-nano-service.md
+++ b/administration/installation-and-deployment/deployment/develop-and-deploy-a-nano-service.md
@@ -4,7 +4,7 @@ description: Develop and deploy a nano service using a guided example.
# Develop and Deploy a Nano Service
-This section shows how to develop and deploy a simple NSO nano service for managing the provisioning of SSH public keys for authentication. For more details on nano services, see [Nano Services for Staged Provisioning](../../../development/core-concepts/nano-services.md) in Development. The example showcasing development is available under [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.5/getting-started/netsim-sshkey). In addition, there is a reference from the `README` in the example's directory to the deployment version of the example.
+This section shows how to develop and deploy a simple NSO nano service for managing the provisioning of SSH public keys for authentication. For more details on nano services, see [Nano Services for Staged Provisioning](../../../development/core-concepts/nano-services.md) in Development. The example showcasing development is available under [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey). In addition, there is a reference from the `README` in the example's directory to the deployment version of the example.
## Development
@@ -424,4 +424,4 @@ Two scripts showcase the nano service:
As with the development version, both scripts will demo the service by generating keys, distributing the public key, and configuring NSO for public key authentication with the network elements.
-To run the example and for more details, see the instructions in the `README` file of the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.5/getting-started/netsim-sshkey) deployment example.
+To run the example and for more details, see the instructions in the `README` file of the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey) deployment example.
diff --git a/administration/installation-and-deployment/deployment/secure-deployment.md b/administration/installation-and-deployment/deployment/secure-deployment.md
index 814410d6..04ba84fe 100644
--- a/administration/installation-and-deployment/deployment/secure-deployment.md
+++ b/administration/installation-and-deployment/deployment/secure-deployment.md
@@ -63,7 +63,7 @@ Running NSO with minimal privileges is a fundamental security best practice:
1. `# chown root cmdwrapper`
2. `# chmod u+s cmdwrapper`
-* The deployment variant referenced in the README file of the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.5/getting-started/netsim-sshkey) example provides a native and NSO production container based example.
+* The deployment variant referenced in the README file of the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey) example provides a native and NSO production container-based example.
## Authentication, Authorization, and Accounting (AAA)
@@ -131,7 +131,7 @@ See [Authenticating IPC Access](../../management/aaa-infrastructure.md#authentic
Secure communication with managed devices:
* Use [Cisco-provided NEDs](../../management/ned-administration.md) when possible.
-* Refer to the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.5/getting-started/netsim-sshkey) README, which references a deployment variant of the example for SSH key update patterns using nano services.
+* Refer to the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey) README, which references a deployment variant of the example for SSH key update patterns using nano services.
## Cryptographic Key Management
diff --git a/administration/installation-and-deployment/post-install-actions/explore-the-installation.md b/administration/installation-and-deployment/post-install-actions/explore-the-installation.md
index 2125d214..11adb134 100644
--- a/administration/installation-and-deployment/post-install-actions/explore-the-installation.md
+++ b/administration/installation-and-deployment/post-install-actions/explore-the-installation.md
@@ -41,7 +41,7 @@ Run `index.html` in your browser to explore further.
### Examples
-Local Install comes with a rich set of [examples](https://github.com/NSO-developer/nso-examples/tree/6.5) to start using NSO.
+Local Install comes with a rich set of [examples](https://github.com/NSO-developer/nso-examples/tree/6.6) to start using NSO.
```bash
$ ls -1 examples.ncs/
@@ -81,7 +81,7 @@ juniper-junos-nc-3.0
```
{% hint style="info" %}
-The example NEDs included in the installer are intended for evaluation, demonstration, and use with the [examples.ncs](https://github.com/NSO-developer/nso-examples/tree/6.5) examples. These are not the latest versions available and often do not have all the features available in production NEDs.
+The example NEDs included in the installer are intended for evaluation, demonstration, and use with the [examples.ncs](https://github.com/NSO-developer/nso-examples/tree/6.6) examples. These are not the latest versions available and often do not have all the features available in production NEDs.
{% endhint %}
#### **Install New NEDs**
diff --git a/administration/installation-and-deployment/post-install-actions/modify-examples-for-system-install.md b/administration/installation-and-deployment/post-install-actions/modify-examples-for-system-install.md
index 58bf7acd..46efe5da 100644
--- a/administration/installation-and-deployment/post-install-actions/modify-examples-for-system-install.md
+++ b/administration/installation-and-deployment/post-install-actions/modify-examples-for-system-install.md
@@ -12,7 +12,7 @@ Since all the NSO examples and README steps that come with the installer are pri
To work with the System Install structure, this may require a little or bigger modification depending on the example.
-For example, to port the [example.ncs/nano-services/basic-vrouter](https://github.com/NSO-developer/nso-examples/tree/6.5/nano-services/basic-vrouter) example to the System Install structure:
+For example, to port the [examples.ncs/nano-services/basic-vrouter](https://github.com/NSO-developer/nso-examples/tree/6.6/nano-services/basic-vrouter) example to the System Install structure:
1. Make the following changes to the `basic-vrouter/ncs.conf` file:
diff --git a/administration/installation-and-deployment/post-install-actions/running-nso-examples.md b/administration/installation-and-deployment/post-install-actions/running-nso-examples.md
index 1e61d070..ee16338c 100644
--- a/administration/installation-and-deployment/post-install-actions/running-nso-examples.md
+++ b/administration/installation-and-deployment/post-install-actions/running-nso-examples.md
@@ -11,7 +11,7 @@ Applies to Local Install.
This section provides an overview of how to run the examples provided with the NSO installer. By working through the examples, the reader should get a good overview of the various aspects of NSO and hands-on experience from interacting with it.
{% hint style="info" %}
-This section references the examples located in [$NCS\_DIR/examples.ncs](https://github.com/NSO-developer/nso-examples/tree/6.5). The examples all have `README` files that include instructions related to the example.
+This section references the examples located in [$NCS\_DIR/examples.ncs](https://github.com/NSO-developer/nso-examples/tree/6.6). The examples all have `README` files that include instructions related to the example.
{% endhint %}
## General Instructions
diff --git a/administration/installation-and-deployment/upgrade-nso.md b/administration/installation-and-deployment/upgrade-nso.md
index 90c47f36..13b05ed4 100644
--- a/administration/installation-and-deployment/upgrade-nso.md
+++ b/administration/installation-and-deployment/upgrade-nso.md
@@ -32,7 +32,7 @@ In case it turns out that any of the packages are incompatible or cannot be reco
Additional preparation steps may be required based on the upgrade and the actual setup, such as when using the Layered Service Architecture (LSA) feature. In particular, for a major NSO upgrade in a multi-version LSA cluster, ensure that the new version supports the other cluster members and follow the additional steps outlined in [Deploying LSA](../advanced-topics/layered-service-architecture.md#deploying-lsa) in Layered Service Architecture.
-If you use the High Availability (HA) feature, the upgrade consists of multiple steps on different nodes. To avoid mistakes, you are encouraged to script the process, for which you will need to set up and verify access to all NSO instances with either `ssh`, `nct`, or some other remote management command. For the reference example, we use in this chapter, see [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc). The management station uses shell and Python scripts that use `ssh` to access the Linux shell and NSO CLI and Python Requests for NSO RESTCONF interface access.
+If you use the High Availability (HA) feature, the upgrade consists of multiple steps on different nodes. To avoid mistakes, you are encouraged to script the process, for which you will need to set up and verify access to all NSO instances with either `ssh`, `nct`, or some other remote management command. For the reference example we use in this chapter, see [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc). The management station uses shell and Python scripts that use `ssh` to access the Linux shell and NSO CLI, and Python Requests for NSO RESTCONF interface access.
Likewise, NSO 5.3 added support for 256-bit AES encrypted strings, requiring the AES256CFB128 key in the `ncs.conf` configuration. You can generate one with the `openssl rand -hex 32` or a similar command. Alternatively, if you use an external command to provide keys, ensure that it includes a value for an `AES256CFB128_KEY` in the output.
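As a rough sketch of the key handling described above (file locations and the external-command output format are assumptions, not prescribed here), generating such a key could look like this:

```bash
# Generate a 256-bit key (64 hex characters) suitable for AES256CFB128.
NEW_KEY=$(openssl rand -hex 32)
echo "Generated key: $NEW_KEY"

# If an external command supplies the keys, its output must include a value
# for AES256CFB128_KEY, e.g. a line such as:
echo "AES256CFB128_KEY=$NEW_KEY"
```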
@@ -418,9 +418,9 @@ To further reduce time spent upgrading, you can customize the script to install
You can use the same script for a maintenance upgrade as-is, with an empty `packages-MAJORVERSION` directory, or remove the `upgrade_packages` calls from the script.
-Example implementations that use scripts to upgrade a 2- and 3-node setup using CLI/MAAPI or RESTCONF are available in the NSO example set under [examples.ncs/high-availability](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability).
+Example implementations that use scripts to upgrade a 2- and 3-node setup using CLI/MAAPI or RESTCONF are available in the NSO example set under [examples.ncs/high-availability](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability).
-We have been using a two-node HCC layer-2 upgrade reference example elsewhere in the documentation to demonstrate installing NSO and adding the initial configuration. The upgrade-l2 example referenced in [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc) implements shell and Python scripted steps to upgrade the NSO version using `ssh` to the Linux shell and the NSO CLI or Python Requests RESTCONF for accessing the `paris` and `london` nodes. See the example for details.
+We have been using a two-node HCC layer-2 upgrade reference example elsewhere in the documentation to demonstrate installing NSO and adding the initial configuration. The `upgrade-l2` example referenced in [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) implements shell and Python scripted steps to upgrade the NSO version using `ssh` to the Linux shell and the NSO CLI or Python Requests RESTCONF for accessing the `paris` and `london` nodes. See the example for details.
If you do not wish to automate the upgrade process, you will need to follow the instructions from [Single Instance Upgrade](upgrade-nso.md#ug.admin_guide.manual_upgrade) and transfer the required files to each host manually. Additional information on HA is available in [High Availability](../management/high-availability.md). However, you can run the `high-availability` actions from the preceding script on the NSO CLI as-is. In this case, please take special care of which host you perform each command, as it can be easy to mix them up.
@@ -488,9 +488,9 @@ The `packages ha sync and-reload` command has the following known limitations an
* The `primary` node is set to `read-only` mode before the upgrade starts, and it is set back to its previous mode if the upgrade is successfully upgraded. However, the node will always be in read-write mode if an error occurs during the upgrade. It is up to the user to set the node back to the desired mode by using the `high-availability read-only mode` command.
* As a best practice, you should create a backup of all nodes before upgrading. This action creates no backups, you must do that explicitly.
-Example implementations that use scripts to upgrade a 2- and 3-node setup using CLI/MAAPI or RESTCONF are available in the NSO example set under [examples.ncs/high-availability](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availabilit).
+Example implementations that use scripts to upgrade a 2- and 3-node setup using CLI/MAAPI or RESTCONF are available in the NSO example set under [examples.ncs/high-availability](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability).
-We have been using a two-node HCC layer 2 upgrade reference example elsewhere in the documentation to demonstrate installing NSO and adding the initial configuration. The `upgrade-l2` example referenced in [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc) implements shell and Python scripted steps to upgrade the `primary` `paris` package versions and sync the packages to the `secondary` `london` using `ssh` to the Linux shell and the NSO CLI or Python Requests RESTCONF for accessing the `paris` and `london` nodes. See the example for details.
+We have been using a two-node HCC layer-2 upgrade reference example elsewhere in the documentation to demonstrate installing NSO and adding the initial configuration. The `upgrade-l2` example referenced in [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) implements shell and Python scripted steps to upgrade the `primary` `paris` package versions and sync the packages to the `secondary` `london` using `ssh` to the Linux shell and the NSO CLI or Python Requests RESTCONF for accessing the `paris` and `london` nodes. See the example for details.
In some cases, NSO may warn when the upgrade looks suspicious. For more information on this, see [Loading Packages](../management/package-mgmt.md#ug.package_mgmt.loading). If you understand the implications and are willing to risk losing data, use the `force` option with `packages reload` or set the `NCS_RELOAD_PACKAGES` environment variable to `force` when restarting NSO. It will force NSO to ignore warnings and proceed with the upgrade. In general, this is not recommended.
diff --git a/administration/management/aaa-infrastructure.md b/administration/management/aaa-infrastructure.md
index 729482f0..fe2c53ba 100644
--- a/administration/management/aaa-infrastructure.md
+++ b/administration/management/aaa-infrastructure.md
@@ -609,7 +609,7 @@ NSO will skip this access check in case the euid of the connecting process is 0
If using Unix socket IPC, clients and client libraries must now specify the path that identifies the socket. The path must match the one set under `ncs-local-ipc/path` in `ncs.conf`. Clients may expose a client-specific way to set it, such as the `-S` option of the `ncs_cli` command. Alternatively, you can use the `NCS_IPC_PATH` environment variable to specify the socket path independently of the used client.
-See [examples.ncs/aaa/ipc](https://github.com/NSO-developer/nso-examples/tree/6.5/aaa/ipc) for a working example.
+See [examples.ncs/aaa/ipc](https://github.com/NSO-developer/nso-examples/tree/6.6/aaa/ipc) for a working example.
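As a minimal sketch of the two options above (the socket path is a placeholder; it must match the value set under `ncs-local-ipc/path` in `ncs.conf`):

```bash
# Option 1: point the client at the IPC socket explicitly.
ncs_cli -S /tmp/nso/ipc -u admin

# Option 2: set the path once in the environment; clients started from
# this shell use it without client-specific options.
export NCS_IPC_PATH=/tmp/nso/ipc
ncs_cli -u admin
```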
## Group Membership
diff --git a/administration/management/high-availability.md b/administration/management/high-availability.md
index c77d106f..ae2a3537 100644
--- a/administration/management/high-availability.md
+++ b/administration/management/high-availability.md
@@ -34,9 +34,9 @@ Compared to traditional fail-over HA solutions, Raft relies on the consensus of
Raft achieves robustness by requiring at least three nodes in the HA cluster. Three is the recommended cluster size, allowing the cluster to operate in the face of a single node failure. In case you need to tolerate two nodes failing simultaneously, you can add two additional nodes, for a 5-node cluster. However, permanently having more than five nodes in a single cluster is currently not recommended since Raft requires the majority of the currently configured nodes in the cluster to reach consensus. Without the consensus, the cluster cannot function.
-You can start a sample HA Raft cluster using the [examples.ncs/high-availability/raft-cluster](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/raft-cluster) example to test it out. The scripts in the example show various aspects of cluster setup and operation, which are further described in the rest of this section.
+You can start a sample HA Raft cluster using the [examples.ncs/high-availability/raft-cluster](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/raft-cluster) example to test it out. The scripts in the example show various aspects of cluster setup and operation, which are further described in the rest of this section.
-Optionally, examples using separate containers for each HA Raft cluster member with NSO system installations are available and referenced in the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/blob/6.4/high-availability/hcc) example in the NSO example set.
+Optionally, examples using separate containers for each HA Raft cluster member with NSO system installations are available and referenced in the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) example in the NSO example set.
### Overview of Raft Operation
@@ -72,9 +72,9 @@ In most cases, this means the `ADDRESS` must appear in the node certificate's Su
Create and use a self-signed CA to secure the NSO HA Raft cluster. A self-signed CA is the only secure option. The CA should only be used to sign the certificates of the member nodes in one NSO HA Raft cluster. It is critical for security that the CA is not used to sign any other certificates. Any certificate signed by the CA can be used to gain complete control of the NSO HA Raft cluster.
-See the [examples.ncs/high-availability/raft-cluster](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/raft-cluster) example for one way to set up a self-signed CA and provision individual node certificates. The example uses a shell script `gen_tls_certs.sh` that invokes the `openssl` command. Consult the section [Recipe for a Self-signed CA](high-availability.md#recipe-for-a-self-signed-ca) for using it independently of the example.
+See the [examples.ncs/high-availability/raft-cluster](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/raft-cluster) example for one way to set up a self-signed CA and provision individual node certificates. The example uses a shell script `gen_tls_certs.sh` that invokes the `openssl` command. Consult the section [Recipe for a Self-signed CA](high-availability.md#recipe-for-a-self-signed-ca) for using it independently of the example.
-Examples using separate containers for each HA Raft cluster member with NSO system installations that use a variant of the `gen_tls_certs.sh` script are available and referenced in the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc) example in the NSO example set.
+Examples using separate containers for each HA Raft cluster member with NSO system installations that use a variant of the `gen_tls_certs.sh` script are available and referenced in the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) example in the NSO example set.
{% hint style="info" %}
When using an IP address instead of a DNS name for node's `ADDRESS`, you must add the IP address to the certificate's dNSName SAN field (adding it to iPAddress field only is insufficient). This is a known limitation in the current version.
@@ -110,7 +110,7 @@ The recipe makes the following assumptions:
To use this recipe:
-* First prepare a working environment on a secure host by creating a new directory and copying the `gen_tls_certs.sh` script from [examples.ncs/high-availability/raft-cluster](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/raft-cluster) into it. Additionally, ensure that the `openssl` command, version 1.1 or later, is available and the system time is set correctly. Supposing that you have a cluster named `lower-west`, you might run:
+* First prepare a working environment on a secure host by creating a new directory and copying the `gen_tls_certs.sh` script from [examples.ncs/high-availability/raft-cluster](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/raft-cluster) into it. Additionally, ensure that the `openssl` command, version 1.1 or later, is available and the system time is set correctly. Supposing that you have a cluster named `lower-west`, you might run:
```bash
$ mkdir raft-ca-lower-west
@@ -418,7 +418,7 @@ For the full procedure, first, ensure all cluster nodes are up and operational,
Note that while the upgrade is in progress, writes to the CDB are not allowed and will be rejected.
-For a `packages ha sync and-reload` example see the `raft-upgrade-l2` NSO system installation-based example referenced by the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc) example in the NSO example set.
+For a `packages ha sync and-reload` example see the `raft-upgrade-l2` NSO system installation-based example referenced by the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) example in the NSO example set.
For more details, troubleshooting, and general upgrade recommendations, see [NSO Packages](package-mgmt.md) and [Upgrade](../installation-and-deployment/upgrade-nso.md).
@@ -446,7 +446,7 @@ The procedure differentiates between the current leader node versus followers. T
For a standard System Install, the single-node procedure is described in [Single Instance Upgrade](../installation-and-deployment/upgrade-nso.md#ug.admin_guide.manual_upgrade), but in general depends on the NSO deployment type. For example, it will be different for containerized environments. For specifics, please refer to the documentation for the deployment type.
-For an example see the `raft-upgrade-l2` NSO system installation-based example referenced by the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc) example in the NSO example set.
+For an example see the `raft-upgrade-l2` NSO system installation-based example referenced by the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) example in the NSO example set.
If the upgrade fails before or during the upgrade of the original leader, start up the original followers to restore service and then restore the original leader, using backup as necessary.
@@ -507,7 +507,7 @@ In an NSO System Install setup, not only does the shared token need to match bet
The token configured on the secondary node is overwritten with the encrypted token of type `aes-256-cfb-128-encrypted-string` from the primary node when the secondary node connects to the primary. If there is a mismatch between the encrypted-string configuration on the nodes, NSO will not decrypt the HA token to match the token presented. As a result, the primary node denies the secondary node access the next time the HA connection needs to reestablish with a "Token mismatch, secondary is not allowed" error.
-See the `upgrade-l2` example, referenced from [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc), for an example setup and the [Deployment Example](../installation-and-deployment/deployment/deployment-example.md) for a description of the example.
+See the `upgrade-l2` example, referenced from [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc), for an example setup and the [Deployment Example](../installation-and-deployment/deployment/deployment-example.md) for a description of the example.
Also, note that the `ncs.crypto_keys` file is highly sensitive. The file contains the encryption keys for all CDB data that is encrypted on disk. Besides the HA token, this often includes passwords for various entities, such as login credentials to managed devices.
@@ -684,7 +684,7 @@ HCC 5.x or later automatically associates VIP addresses with Linux network inter
Since version 5.0, HCC relies on the NSO built-in HA for cluster management and only performs address or route management in reaction to cluster changes. Therefore, no special measures are necessary if using HCC when performing an NSO version upgrade or a package upgrade. Instead, you should follow the standard best practice HA upgrade procedure from [NSO HA Version Upgrade](../installation-and-deployment/upgrade-nso.md#ch_upgrade.ha).
-A reference to upgrade examples can be found in the README under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc).
+A reference to upgrade examples can be found in the README under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc).
### Layer-2
@@ -854,7 +854,7 @@ This section describes basic deployment scenarios for HCC. Layer-2 mode is demon
* [Enabling Layer-3 BGP](high-availability.md#enabling-layer-3-bgp)
* [Enabling Layer-3 DNS](high-availability.md#enabling-layer-3-dns)
-A reference to container-based examples for the layer-2 and layer-3 deployment scenarios described here can be found in the NSO example set under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc).
+A reference to container-based examples for the layer-2 and layer-3 deployment scenarios described here can be found in the NSO example set under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc).
Both scenarios consist of two test nodes: `london` and `paris` with a single IPv4 VIP address. For the layer-2 scenario, the nodes are on the same network. The layer-3 scenario also involves a BGP-enabled `router` node as the `london` and `paris` nodes are on two different networks.
@@ -916,7 +916,7 @@ root@london:~# ip address list
Layer-2 Example Implementation:
-A reference to a container-based example of the layer-2 scenario can be found in the NSO example set. See the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc) `README`.
+A reference to a container-based example of the layer-2 scenario can be found in the NSO example set. See the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) `README`.
#### **Enabling Layer-3 BGP**
@@ -986,7 +986,7 @@ The VIP subnet is routed to the `paris` host, which is the primary node.
Layer-3 BGP Example Implementation:
-A reference to a container-based example of the combined layer-2 and layer-3 BGP scenario can be found in the NSO example set. See the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc) `README`.
+A reference to a container-based example of the combined layer-2 and layer-3 BGP scenario can be found in the NSO example set. See the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) `README`.
#### **Enabling Layer-3 DNS**
@@ -1043,7 +1043,7 @@ As an alternative to the HCC package, NSO built-in HA, either rule-based or HA R
Load Balancer Routes Connections to the Appropriate NSO Node
-The load balancer uses HTTP health checks to determine which node is currently the active primary. The example, found in the [examples.ncs/high-availability/load-balancer](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/load-balancer) directory uses HTTP status codes on the health check endpoint to easily distinguish whether the node is currently primary or not.
+The load balancer uses HTTP health checks to determine which node is currently the active primary. The example, found in the [examples.ncs/high-availability/load-balancer](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/load-balancer) directory, uses HTTP status codes on the health check endpoint to easily distinguish whether the node is currently primary or not.
In the example, freely available HAProxy software is used as a load balancer to demonstrate the functionality. It is configured to steer connections on localhost to either of the TCP port 2024 (SSH CLI) and TCP port 8080 (web UI and RESTCONF) to the active node in a 2-node HA cluster. The HAProxy software is required if you wish to run this example yourself.
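As a sketch of the health-check idea only (the endpoint path and status-code mapping are assumptions; the actual HAProxy configuration lives in the example itself):

```bash
# Ask a node's health-check endpoint whether it is the active primary.
# Assumption: the primary answers 200, a non-primary answers something else.
status=$(curl -s -o /dev/null -w '%{http_code}' http://london:8080/health)
if [ "$status" = "200" ]; then
  echo "london is the active primary"
else
  echo "london is not primary (HTTP $status)"
fi
```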
diff --git a/administration/management/ned-administration.md b/administration/management/ned-administration.md
index a9f558cf..a7eeeaba 100644
--- a/administration/management/ned-administration.md
+++ b/administration/management/ned-administration.md
@@ -416,7 +416,7 @@ If applying the steps for this example on a production system, you should first
### Prepare the Example
-This guide uses the MPLS VPN example in Python from the NSO example set under [examples.ncs/service-management/mpls-vpn-python](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-python) to demonstrate porting an existing application to use the `juniper-junos_nc` NED. The simulated Junos device is replaced with a Junos vMX 21.1R1.11 container, but other NETCONF/YANG-compliant Junos versions also work.
+This guide uses the MPLS VPN example in Python from the NSO example set under [examples.ncs/service-management/mpls-vpn-python](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-python) to demonstrate porting an existing application to use the `juniper-junos_nc` NED. The simulated Junos device is replaced with a Junos vMX 21.1R1.11 container, but other NETCONF/YANG-compliant Junos versions also work.
### **Add the `juniper-junos` and `juniper-junos_nc` NED Packages**
@@ -958,6 +958,6 @@ However, there is a major downside to this approach. While the exact revision is
If you still wish to use this functionality, you can create a NED package with the `ncs-make-package --netconf-ned` command as you would otherwise. However, the supplied source YANG directory should contain YANG modules with different revisions. The files should follow the _`module-or-submodule-name`_`@`_`revision-date`_`.yang` naming convention, as specified in the RFC6020. Some versions of the compiler require you to use the `--no-fail-on-warnings` option with the `ncs-make-package` command or the build process may fail.
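A minimal sketch of such a build, assuming a source directory and package name chosen here purely for illustration:

```bash
# The source directory holds multiple revisions of the same module, named
# <module-or-submodule-name>@<revision-date>.yang, for example:
ls src-yang/
#   router@2020-02-27.yang
#   router@2020-09-18.yang

# Build the NETCONF NED package from that directory; some compiler
# versions need --no-fail-on-warnings, as noted above.
ncs-make-package --netconf-ned ./src-yang --no-fail-on-warnings router-nc
```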
-The [examples.ncs/device-management/ned-yang-revision](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/ned-yang-revision) example shows how you can perform a YANG model upgrade. The original, 1.0 version of the router NED uses the `router@2020-02-27.yang` YANG model. First, it is updated to the version 1.0.1 `router@2020-09-18.yang` using a revision merge approach. This is possible because the changes are backward-compatible.
+The [examples.ncs/device-management/ned-yang-revision](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/ned-yang-revision) example shows how you can perform a YANG model upgrade. The original 1.0 version of the router NED uses the `router@2020-02-27.yang` YANG model. First, it is updated to version 1.0.1 with `router@2020-09-18.yang` using a revision merge approach. This is possible because the changes are backward-compatible.
In the second part of the example, the updates in `router@2022-01-25.yang` introduce breaking changes, therefore the version is increased to 1.1 and a different NED-ID is assigned to the NED. In this case, you can't use revision merge and the usual NED migration procedure is required.
diff --git a/administration/management/package-mgmt.md b/administration/management/package-mgmt.md
index 4510b55a..cf4fbbff 100644
--- a/administration/management/package-mgmt.md
+++ b/administration/management/package-mgmt.md
@@ -150,7 +150,7 @@ show-tag interface
So the above command shows that NSO currently has one package, the NED for Cisco IOS.
-NSO reads global configuration parameters from `ncs.conf`. More on NSO configuration later in this guide. By default, it tells NSO to look for packages in a `packages` directory where NSO was started. Using the [examples.ncs/device-management/simulated-cisco-ios](https://github.com/NSO-developer/nso-examples/blob/6.4/device-management/simulated-cisco-ios) example to demonstrate:
+NSO reads global configuration parameters from `ncs.conf`. More on NSO configuration later in this guide. By default, it tells NSO to look for packages in a `packages` directory where NSO was started. Using the [examples.ncs/device-management/simulated-cisco-ios](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/simulated-cisco-ios) example to demonstrate:
```bash
$ pwd
diff --git a/administration/management/system-management/README.md b/administration/management/system-management/README.md
index 2a52943a..20a2e6fa 100644
--- a/administration/management/system-management/README.md
+++ b/administration/management/system-management/README.md
@@ -330,11 +330,11 @@ NSO logs in `/logs` in your running directory, (depends on your settings in `ncs
```
* Progress trace log: When a transaction or action is applied, NSO emits specific progress events. These events can be displayed and recorded in a number of different ways, either in CLI with the pipe target `details` on a commit, or by writing it to a log file. You can read more about it in the [Progress Trace](../../../development/advanced-development/progress-trace.md).
* Transaction error log: log for collecting information on failed transactions that lead to either a CDB boot error or a runtime transaction failure. The default is `false` (disabled). More information about the log is available in the Manual Pages under [Configuration Parameters](../../../resources/man/ncs.conf.5.md#configuration-parameters) (see `logs/transaction-error-log`).
-* Upgrade log: log containing information about CDB upgrade. The log is enabled by default and not rotated (i.e., use logrotate). With the NSO example set, the following examples populate the log in the `logs/upgrade.log` file: [examples.ncs/device-management/ned-yang-revision](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/ned-yang-revision), [examples.ncs/high-availability/upgrade-basic](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/upgrade-basic), [examples.ncs/high-availability/upgrade-cluster](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/upgrade-cluster), and [examples.ncs/service-management/upgrade-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/upgrade-service). More information about the log is available in the Manual Pages under [Configuration Parameters](../../../resources/man/ncs.conf.5.md#configuration-parameters) (see `logs/upgrade-log)`.
+* Upgrade log: log containing information about CDB upgrade. The log is enabled by default and not rotated (i.e., use logrotate). With the NSO example set, the following examples populate the log in the `logs/upgrade.log` file: [examples.ncs/device-management/ned-yang-revision](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/ned-yang-revision), [examples.ncs/high-availability/upgrade-basic](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/upgrade-basic), [examples.ncs/high-availability/upgrade-cluster](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/upgrade-cluster), and [examples.ncs/service-management/upgrade-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/upgrade-service). More information about the log is available in the Manual Pages under [Configuration Parameters](../../../resources/man/ncs.conf.5.md#configuration-parameters) (see `logs/upgrade-log`).
### Syslog
-NSO can syslog to a local Syslog. See `man ncs.conf` how to configure the Syslog settings. All Syslog messages are documented in Log Messages. The `ncs.conf` also lets you decide which of the logs should go into Syslog: `ncs.log, devel.log, netconf.log, snmp.log, audit.log, WebUI access log`. There is also a possibility to integrate with `rsyslog` to log the NCS, developer, audit, netconf, SNMP, and WebUI access logs to syslog with the facility set to daemon in `ncs.conf`. For reference, see the `upgrade-l2` example [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc) .
+NSO can log to a local Syslog. See `man ncs.conf` for how to configure the Syslog settings. All Syslog messages are documented in Log Messages. The `ncs.conf` file also lets you decide which of the logs should go into Syslog: `ncs.log, devel.log, netconf.log, snmp.log, audit.log, WebUI access log`. It is also possible to integrate with `rsyslog` to log the NCS, developer, audit, NETCONF, SNMP, and WebUI access logs to syslog with the facility set to daemon in `ncs.conf`. For reference, see the `upgrade-l2` example in [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc).
Below is an example of Syslog configuration:
@@ -367,7 +367,7 @@ NSO generates alarms for serious problems that must be remedied. Alarms are avai
The NSO alarm manager also presents a northbound SNMP view, alarms can be retrieved as an alarm table, and alarm state changes are reported as SNMP Notifications. See the "NSO Northbound" documentation on how to configure the SNMP Agent.
-This is also documented in the example [examples.ncs/northbound-interfaces/snmp-alarm](https://github.com/NSO-developer/nso-examples/tree/6.5/northbound-interfaces/snmp-alarm).
+This is also documented in the example [examples.ncs/northbound-interfaces/snmp-alarm](https://github.com/NSO-developer/nso-examples/tree/6.6/northbound-interfaces/snmp-alarm).
Alarms are described on the link below:
diff --git a/administration/management/system-management/log-messages-and-formats.md b/administration/management/system-management/log-messages-and-formats.md
index 2435a193..31b4d3a6 100644
--- a/administration/management/system-management/log-messages-and-formats.md
+++ b/administration/management/system-management/log-messages-and-formats.md
@@ -243,64 +243,64 @@
-CANDIDATE_BAD_FILE_FORMAT
+CAND_COMMIT_ROLLBACK_DONE
-CANDIDATE_BAD_FILE_FORMAT
+CAND_COMMIT_ROLLBACK_DONE
* **Severity**
- `WARNING`
+ `INFO`
* **Description**
- The candidate database file has a bad format. The candidate database is reset to the empty database.
+ Candidate commit rollback done
* **Format String**
- `"Bad format found in candidate db file ~s; resetting candidate"`
+ `"Candidate commit rollback done"`
-CANDIDATE_CORRUPT_FILE
+CAND_COMMIT_ROLLBACK_FAILURE
-CANDIDATE_CORRUPT_FILE
+CAND_COMMIT_ROLLBACK_FAILURE
* **Severity**
- `WARNING`
+ `ERR`
* **Description**
- The candidate database file is corrupt and cannot be read. The candidate database is reset to the empty database.
+ Failed to rollback candidate commit
* **Format String**
- `"Corrupt candidate db file ~s; resetting candidate"`
+ `"Failed to rollback candidate commit due to: ~s"`
-CAND_COMMIT_ROLLBACK_DONE
+CANDIDATE_BAD_FILE_FORMAT
-CAND_COMMIT_ROLLBACK_DONE
+CANDIDATE_BAD_FILE_FORMAT
* **Severity**
- `INFO`
+ `WARNING`
* **Description**
- Candidate commit rollback done
+ The candidate database file has a bad format. The candidate database is reset to the empty database.
* **Format String**
- `"Candidate commit rollback done"`
+ `"Bad format found in candidate db file ~s; resetting candidate"`
-CAND_COMMIT_ROLLBACK_FAILURE
+CANDIDATE_CORRUPT_FILE
-CAND_COMMIT_ROLLBACK_FAILURE
+CANDIDATE_CORRUPT_FILE
* **Severity**
- `ERR`
+ `WARNING`
* **Description**
- Failed to rollback candidate commit
+ The candidate database file is corrupt and cannot be read. The candidate database is reset to the empty database.
* **Format String**
- `"Failed to rollback candidate commit due to: ~s"`
+ `"Corrupt candidate db file ~s; resetting candidate"`
@@ -531,48 +531,48 @@
-CLI_CMD
+CLI_CMD_ABORTED
-CLI_CMD
+CLI_CMD_ABORTED
* **Severity**
`INFO`
* **Description**
- User executed a CLI command.
+ CLI command aborted.
* **Format String**
- `"CLI '~s'"`
+ `"CLI aborted"`
-CLI_CMD_ABORTED
+CLI_CMD_DONE
-CLI_CMD_ABORTED
+CLI_CMD_DONE
* **Severity**
`INFO`
* **Description**
- CLI command aborted.
+ CLI command finished successfully.
* **Format String**
- `"CLI aborted"`
+ `"CLI done"`
-CLI_CMD_DONE
+CLI_CMD
-CLI_CMD_DONE
+CLI_CMD
* **Severity**
`INFO`
* **Description**
- CLI command finished successfully.
+ User executed a CLI command.
* **Format String**
- `"CLI done"`
+ `"CLI '~s'"`
@@ -1011,16 +1011,16 @@
-EXTAUTH_BAD_RET
+EXT_AUTH_2FA_FAIL
-EXTAUTH_BAD_RET
+EXT_AUTH_2FA_FAIL
* **Severity**
- `ERR`
+ `INFO`
* **Description**
- Authentication is external and the external program returned badly formatted data.
+ External challenge authentication failed for a user.
* **Format String**
- `"External auth program (user=~s) ret bad output: ~s"`
+ `"external challenge authentication failed via ~s from ~s with ~s: ~s"`
@@ -1043,32 +1043,32 @@
-EXT_AUTH_2FA_FAIL
+EXT_AUTH_2FA_SUCCESS
-EXT_AUTH_2FA_FAIL
+EXT_AUTH_2FA_SUCCESS
* **Severity**
`INFO`
* **Description**
- External challenge authentication failed for a user.
+ An external challenge authenticated user logged in.
* **Format String**
- `"external challenge authentication failed via ~s from ~s with ~s: ~s"`
+ `"external challenge authentication succeeded via ~s from ~s with ~s, member of groups: ~s~s"`
-EXT_AUTH_2FA_SUCCESS
+EXTAUTH_BAD_RET
-EXT_AUTH_2FA_SUCCESS
+EXTAUTH_BAD_RET
* **Severity**
- `INFO`
+ `ERR`
* **Description**
- An external challenge authenticated user logged in.
+ Authentication is external and the external program returned badly formatted data.
* **Format String**
- `"external challenge authentication succeeded via ~s from ~s with ~s, member of groups: ~s~s"`
+ `"External auth program (user=~s) ret bad output: ~s"`
@@ -1187,32 +1187,32 @@
-FILE_LOADING
+FILE_LOAD_ERR
-FILE_LOADING
+FILE_LOAD_ERR
* **Severity**
- `DEBUG`
+ `CRIT`
* **Description**
- System starts to load a file.
+ System tried to load a file in its load path and failed.
* **Format String**
- `"Loading file ~s"`
+ `"Failed to load file ~s: ~s"`
-FILE_LOAD_ERR
+FILE_LOADING
-FILE_LOAD_ERR
+FILE_LOADING
* **Severity**
- `CRIT`
+ `DEBUG`
* **Description**
- System tried to load a file in its load path and failed.
+ System starts to load a file.
* **Format String**
- `"Failed to load file ~s: ~s"`
+ `"Loading file ~s"`
@@ -1411,48 +1411,48 @@
-JSONRPC_REQUEST
+JSONRPC_REQUEST_ABSOLUTE_TIMEOUT
-JSONRPC_REQUEST
+JSONRPC_REQUEST_ABSOLUTE_TIMEOUT
* **Severity**
`INFO`
* **Description**
- JSON-RPC method requested.
+ JSON-RPC absolute timeout.
* **Format String**
- `"JSON-RPC: '~s' with JSON params ~s"`
+ `"Stopping session due to absolute timeout: ~s"`
-JSONRPC_REQUEST_ABSOLUTE_TIMEOUT
+JSONRPC_REQUEST_IDLE_TIMEOUT
-JSONRPC_REQUEST_ABSOLUTE_TIMEOUT
+JSONRPC_REQUEST_IDLE_TIMEOUT
* **Severity**
`INFO`
* **Description**
- JSON-RPC absolute timeout.
+ JSON-RPC idle timeout.
* **Format String**
- `"Stopping session due to absolute timeout: ~s"`
+ `"Stopping session due to idle timeout: ~s"`
-JSONRPC_REQUEST_IDLE_TIMEOUT
+JSONRPC_REQUEST
-JSONRPC_REQUEST_IDLE_TIMEOUT
+JSONRPC_REQUEST
* **Severity**
`INFO`
* **Description**
- JSON-RPC idle timeout.
+ JSON-RPC method requested.
* **Format String**
- `"Stopping session due to idle timeout: ~s"`
+ `"JSON-RPC: '~s' with JSON params ~s"`
@@ -1555,14 +1555,14 @@
-LOCAL_AUTH_FAIL
+LOCAL_AUTH_FAIL_BADPASS
-LOCAL_AUTH_FAIL
+LOCAL_AUTH_FAIL_BADPASS
* **Severity**
`INFO`
* **Description**
- Authentication for a locally configured user failed.
+ Authentication for a locally configured user failed due to providing bad password.
* **Format String**
`"local authentication failed via ~s from ~s with ~s: ~s"`
@@ -1571,14 +1571,14 @@
-LOCAL_AUTH_FAIL_BADPASS
+LOCAL_AUTH_FAIL
-LOCAL_AUTH_FAIL_BADPASS
+LOCAL_AUTH_FAIL
* **Severity**
`INFO`
* **Description**
- Authentication for a locally configured user failed due to providing bad password.
+ Authentication for a locally configured user failed.
* **Format String**
`"local authentication failed via ~s from ~s with ~s: ~s"`
@@ -1811,32 +1811,32 @@
-MISSING_NS
+MISSING_NS2
-MISSING_NS
+MISSING_NS2
* **Severity**
`CRIT`
* **Description**
While validating the consistency of the config - a required namespace was missing.
* **Format String**
- `"The namespace ~s could not be found in the loadPath."`
+ `"The namespace ~s (referenced by ~s) could not be found in the loadPath."`
-MISSING_NS2
+MISSING_NS
-MISSING_NS2
+MISSING_NS
* **Severity**
`CRIT`
* **Description**
While validating the consistency of the config - a required namespace was missing.
* **Format String**
- `"The namespace ~s (referenced by ~s) could not be found in the loadPath."`
+ `"The namespace ~s could not be found in the loadPath."`
@@ -1859,32 +1859,32 @@
-NETCONF
+NETCONF_HDR_ERR
-NETCONF
+NETCONF_HDR_ERR
* **Severity**
- `INFO`
+ `ERR`
* **Description**
- NETCONF traffic log message
+ The cleartext header indicating user and groups was badly formatted.
* **Format String**
- `"~s"`
+ `"Got bad NETCONF TCP header"`
-NETCONF_HDR_ERR
+NETCONF
-NETCONF_HDR_ERR
+NETCONF
* **Severity**
- `ERR`
+ `INFO`
* **Description**
- The cleartext header indicating user and groups was badly formatted.
+ NETCONF traffic log message
* **Format String**
- `"Got bad NETCONF TCP header"`
+ `"~s"`
@@ -1921,22 +1921,6 @@
-
-
-NOTIFICATION_REPLAY_STORE_FAILURE
-
-NOTIFICATION_REPLAY_STORE_FAILURE
-
-* **Severity**
- `CRIT`
-* **Description**
- A failure occurred in the builtin notification replay store
-* **Format String**
- `"~s"`
-
-
-
-
NO_CALLPOINT
@@ -2003,16 +1987,16 @@
-NS_LOAD_ERR
+NOTIFICATION_REPLAY_STORE_FAILURE
-NS_LOAD_ERR
+NOTIFICATION_REPLAY_STORE_FAILURE
* **Severity**
`CRIT`
* **Description**
- System tried to process a loaded namespace and failed.
+ A failure occurred in the builtin notification replay store
* **Format String**
- `"Failed to process namespace ~s: ~s"`
+ `"~s"`
@@ -2033,6 +2017,22 @@
+
+
+NS_LOAD_ERR
+
+NS_LOAD_ERR
+
+* **Severity**
+ `CRIT`
+* **Description**
+ System tried to process a loaded namespace and failed.
+* **Format String**
+ `"Failed to process namespace ~s: ~s"`
+
+
+
+
OPEN_LOGFILE
@@ -2163,64 +2163,64 @@
-RESTCONF_REQUEST
+REST_AUTH_FAIL
-RESTCONF_REQUEST
+REST_AUTH_FAIL
* **Severity**
`INFO`
* **Description**
- RESTCONF request
+ Rest authentication for a user failed.
* **Format String**
- `"RESTCONF: request with ~s: ~s"`
+ `"rest authentication failed from ~s"`
-RESTCONF_RESPONSE
+REST_AUTH_SUCCESS
-RESTCONF_RESPONSE
+REST_AUTH_SUCCESS
* **Severity**
`INFO`
* **Description**
- RESTCONF response
+ A rest authenticated user logged in.
* **Format String**
- `"RESTCONF: response with ~s: ~s duration ~s us"`
+ `"rest authentication succeeded from ~s , member of groups: ~s"`
-REST_AUTH_FAIL
+RESTCONF_REQUEST
-REST_AUTH_FAIL
+RESTCONF_REQUEST
* **Severity**
`INFO`
* **Description**
- Rest authentication for a user failed.
+ RESTCONF request
* **Format String**
- `"rest authentication failed from ~s"`
+ `"RESTCONF: request with ~s: ~s"`
-REST_AUTH_SUCCESS
+RESTCONF_RESPONSE
-REST_AUTH_SUCCESS
+RESTCONF_RESPONSE
* **Severity**
`INFO`
* **Description**
- A rest authenticated user logged in.
+ RESTCONF response
* **Format String**
- `"rest authentication succeeded from ~s , member of groups: ~s"`
+ `"RESTCONF: response with ~s: ~s duration ~s us"`
@@ -2801,22 +2801,6 @@
-
-
-WEBUI_LOG_MSG
-
-WEBUI_LOG_MSG
-
-* **Severity**
- `INFO`
-* **Description**
- WebUI access log message
-* **Format String**
- `"WebUI access log: ~s"`
-
-
-
-
WEB_ACTION
@@ -2865,6 +2849,22 @@
+
+
+WEBUI_LOG_MSG
+
+WEBUI_LOG_MSG
+
+* **Severity**
+ `INFO`
+* **Description**
+ WebUI access log message
+* **Format String**
+ `"WebUI access log: ~s"`
+
+
+
+
WRITE_STATE_FILE_FAILED
@@ -3361,6 +3361,22 @@
+
+
+NCS_SNMP_INIT_ERR
+
+NCS_SNMP_INIT_ERR
+
+* **Severity**
+ `INFO`
+* **Description**
+ Failed to locate snmp_init.xml in loadpath
+* **Format String**
+ `"Failed to locate snmp_init.xml in loadpath ~s"`
+
+
+
+
NCS_SNMPM_START
@@ -3395,16 +3411,32 @@
-NCS_SNMP_INIT_ERR
+NCS_TLS_CERT_LOAD_FR_DB_ERR
-NCS_SNMP_INIT_ERR
+NCS_TLS_CERT_LOAD_FR_DB_ERR
* **Severity**
- `INFO`
+ `CRIT`
* **Description**
- Failed to locate snmp_init.xml in loadpath
+ Failed to load SSL/TLS certificate from database.
* **Format String**
- `"Failed to locate snmp_init.xml in loadpath ~s"`
+ `"Failed to load SSL/TLS certificate from db: ~s."`
+
+
+
+
+
+
+NCS_TLS_CERT_LOAD_FR_FILE_ERR
+
+NCS_TLS_CERT_LOAD_FR_FILE_ERR
+
+* **Severity**
+ `CRIT`
+* **Description**
+ Failed to load SSL/TLS certificate from file.
+* **Format String**
+ `"Failed to load SSL/TLS certificate from file: ~s; Please check files specified at /ncs-config/webui/transport/ssl/cert-file or /ncs-config/webui/transport/ssl/ca-cert-file"`
diff --git a/developer-reference/erlang/econfd_cdb.md b/developer-reference/erlang/econfd_cdb.md
index 3003fedb..5efe2396 100644
--- a/developer-reference/erlang/econfd_cdb.md
+++ b/developer-reference/erlang/econfd_cdb.md
@@ -1004,11 +1004,11 @@ The fun can return the atom 'close' if we wish to close the socket and return fr
* ?CDB_DONE_TRANSACTION This means that CDB should not send any further notifications to any subscribers - including ourselves - related to the currently executing transaction.
* ?CDB_DONE_OPERATIONAL This should be used when a subscription notification for operational data has been read. It is the only type that should be used in this case, since the operational data does not have transactions and the notifications do not have priorities.
-Finally the arity-3 fun can, when Type == ?CDB_SUB_PREPARE, return an error either as \{error, binary()\} or as \{error, #confd_error\{\}\} (\{error, tuple()\} is only for internal ConfD/NCS use). This will cause the commit of the current transaction to be aborted.
+Finally, the arity-3 fun can, when Type == ?CDB_SUB_PREPARE, return an error either as `{error, binary()}` or as `{error, #confd_error{}}` (`{error, tuple()}` is only for internal ConfD/NCS use). This will cause the commit of the current transaction to be aborted.
CDB is locked for writing while config subscriptions are delivered.
-When wait/3 returns \{error, timeout\} the connection (and its subscriptions) is still active and the application needs to call wait/3 again. But if wait/3 returns ok or \{error, Reason\} the connection to ConfD is closed and all subscription points associated with it are cleared.
+When wait/3 returns `{error, timeout}` the connection (and its subscriptions) is still active and the application needs to call wait/3 again. But if wait/3 returns `ok` or `{error, Reason}` the connection to ConfD is closed and all subscription points associated with it are cleared.
### wait_start/1
diff --git a/developer-reference/erlang/econfd_notif.md b/developer-reference/erlang/econfd_notif.md
index 6a992346..e54f6754 100644
--- a/developer-reference/erlang/econfd_notif.md
+++ b/developer-reference/erlang/econfd_notif.md
@@ -200,7 +200,7 @@ Wait for an event notification message and return corresponding record depending
The logno element in the record is an integer. These integers can be used as an index to the function `econfd_logsyms:get_logsym/1` in order to get a textual description for the event.
-When recv/2 returns \{error, timeout\} the connection (and its event subscriptions) is still active and the application needs to call recv/2 again. But if recv/2 returns \{error, Reason\} the connection to ConfD is closed and all event subscriptions associated with it are cleared.
+When recv/2 returns `{error, timeout}` the connection (and its event subscriptions) is still active and the application needs to call recv/2 again. But if recv/2 returns `{error, Reason}` the connection to ConfD is closed and all event subscriptions associated with it are cleared.
### unpack_ha_node/1
diff --git a/developer-reference/pyapi/README.md b/developer-reference/pyapi/README.md
index a1e3cf22..e8b4a894 100644
--- a/developer-reference/pyapi/README.md
+++ b/developer-reference/pyapi/README.md
@@ -1,28 +1,28 @@
---
icon: square-p
---
-
# Python API Reference
Documentation for Python modules, generated from module source:
-* [ncs](ncs.md): NCS Python high level module.
-* [ncs.alarm](ncs.alarm.md): NCS Alarm Manager module.
-* [ncs.application](ncs.application.md): Module for building NCS applications.
-* [ncs.cdb](ncs.cdb.md): CDB high level module.
-* [ncs.dp](ncs.dp.md): Callback module for connecting data providers to ConfD/NCS.
-* [ncs.experimental](ncs.experimental.md): Experimental stuff.
-* [ncs.log](ncs.log.md): This module provides some logging utilities.
-* [ncs.maagic](ncs.maagic.md): Confd/NCS data access module.
-* [ncs.maapi](ncs.maapi.md): MAAPI high level module.
-* [ncs.progress](ncs.progress.md): MAAPI progress trace high level module.
-* [ncs.service\_log](ncs.service_log.md): This module provides service logging
-* [ncs.template](ncs.template.md): This module implements classes to simplify template processing.
-* [ncs.util](ncs.util.md): Utility module, low level abstrations
-* [\_ncs](_ncs.md): NCS Python low level module.
-* [\_ncs.cdb](_ncs.cdb.md): Low level module for connecting to NCS built-in XML database (CDB).
-* [\_ncs.dp](_ncs.dp.md): Low level callback module for connecting data providers to NCS.
-* [\_ncs.error](_ncs.error.md): This module defines new NCS Python API exception classes.
-* [\_ncs.events](_ncs.events.md): Low level module for subscribing to NCS event notifications.
-* [\_ncs.ha](_ncs.ha.md): Low level module for connecting to NCS HA subsystem.
-* [\_ncs.maapi](_ncs.maapi.md): Low level module for connecting to NCS with a read/write interface inside transactions.
+- [ncs](ncs.md): NCS Python high level module.
+- [ncs.alarm](ncs.alarm.md): NCS Alarm Manager module.
+- [ncs.application](ncs.application.md): Module for building NCS applications.
+- [ncs.cdb](ncs.cdb.md): CDB high level module.
+- [ncs.dp](ncs.dp.md): Callback module for connecting data providers to ConfD/NCS.
+- [ncs.experimental](ncs.experimental.md): Experimental stuff.
+- [ncs.log](ncs.log.md): This module provides some logging utilities.
+- [ncs.maagic](ncs.maagic.md): Confd/NCS data access module.
+- [ncs.maapi](ncs.maapi.md): MAAPI high level module.
+- [ncs.progress](ncs.progress.md): MAAPI progress trace high level module.
+- [ncs.service_log](ncs.service_log.md): This module provides service logging.
+- [ncs.template](ncs.template.md): This module implements classes to simplify template processing.
+- [ncs.util](ncs.util.md): Utility module, low level abstractions.
+- [_ncs](_ncs.md): NCS Python low level module.
+- [_ncs.cdb](_ncs.cdb.md): Low level module for connecting to NCS built-in XML database (CDB).
+- [_ncs.dp](_ncs.dp.md): Low level callback module for connecting data providers to NCS.
+- [_ncs.error](_ncs.error.md): This module defines new NCS Python API exception classes.
+- [_ncs.events](_ncs.events.md): Low level module for subscribing to NCS event notifications.
+- [_ncs.ha](_ncs.ha.md): Low level module for connecting to NCS HA subsystem.
+- [_ncs.maapi](_ncs.maapi.md): Low level module for connecting to NCS with a read/write interface
+inside transactions.
diff --git a/developer-reference/pyapi/_ncs.cdb.md b/developer-reference/pyapi/_ncs.cdb.md
index 0da7eae1..0070e26f 100644
--- a/developer-reference/pyapi/_ncs.cdb.md
+++ b/developer-reference/pyapi/_ncs.cdb.md
@@ -1,14 +1,22 @@
-# \_ncs.cdb Module
+# Python _ncs.cdb Module
Low level module for connecting to NCS built-in XML database (CDB).
-This module is used to connect to the NCS built-in XML database, CDB. The purpose of this API is to provide a read and subscription API to CDB.
+This module is used to connect to the NCS built-in XML database, CDB.
+The purpose of this API is to provide a read and subscription API to CDB.
-CDB owns and stores the configuration data and the user of the API wants to read that configuration data and also get notified when someone through either NETCONF, SNMP, the CLI, the Web UI or the MAAPI modifies the data so that the application can re-read the configuration data and act accordingly.
+CDB owns and stores the configuration data and the user of the API wants
+to read that configuration data and also get notified when someone through
+either NETCONF, SNMP, the CLI, the Web UI or the MAAPI modifies the data
+so that the application can re-read the configuration data and act
+accordingly.
-CDB can also store operational data, i.e. data which is designated with a "config false" statement in the YANG data model. Operational data can be both read and written by the applications, but NETCONF and the other northbound agents can only read the operational data.
+CDB can also store operational data, i.e. data which is designated with a
+"config false" statement in the YANG data model. Operational data can be
+both read and written by the applications, but NETCONF and the other
+northbound agents can only read the operational data.
-This documentation should be read together with the [confd\_lib\_cdb(3)](../../resources/man/confd_lib_cdb.3.md) man page.
+This documentation should be read together with the [confd_lib_cdb(3)](../../resources/man/confd_lib_cdb.3.md) man page.
## Functions
@@ -18,7 +26,8 @@ This documentation should be read together with the [confd\_lib\_cdb(3)](../../r
cd(sock, path) -> None
```
-Changes the working directory according to the format path. Note that this function can not be used as an existence test.
+Changes the working directory according to the format path. Note that
+this function can not be used as an existence test.
Keyword arguments:
@@ -31,7 +40,8 @@ Keyword arguments:
close(sock) -> None
```
-Closes the socket. end\_session() should be called before calling this function.
+Closes the socket. end_session() should be called before calling this
+function.
Keyword arguments:
@@ -43,32 +53,39 @@ Keyword arguments:
connect(sock, type, ip, port, path) -> None
```
-The application has to connect to NCS before it can interact. There are two different types of connections identified by the type argument - DATA\_SOCKET and SUBSCRIPTION\_SOCKET.
+The application has to connect to NCS before it can interact. There are two
+different types of connections identified by the type argument -
+DATA_SOCKET and SUBSCRIPTION_SOCKET.
Keyword arguments:
* sock -- a Python socket instance
-* type -- DATA\_SOCKET or SUBSCRIPTION\_SOCKET
-* ip -- the ip address if socket is AF\_INET (optional)
-* port -- the port if socket is AF\_INET (optional)
-* path -- a filename if socket is AF\_UNIX (optional).
+* type -- DATA_SOCKET or SUBSCRIPTION_SOCKET
+* ip -- the ip address if socket is AF_INET (optional)
+* port -- the port if socket is AF_INET (optional)
+* path -- a filename if socket is AF_UNIX (optional).
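+
+As a minimal sketch (not part of the upstream reference), this is how the
+two socket types might be connected, assuming NSO listens on the default
+IPC address 127.0.0.1:4569:
+
+```python
+import socket
+from _ncs import cdb
+
+NSO_IP, NSO_PORT = '127.0.0.1', 4569  # default NSO IPC address (assumption)
+
+# One socket for reading data, one for receiving subscription notifications.
+data_sock = socket.socket()
+cdb.connect(data_sock, cdb.DATA_SOCKET, NSO_IP, NSO_PORT)
+
+sub_sock = socket.socket()
+cdb.connect(sub_sock, cdb.SUBSCRIPTION_SOCKET, NSO_IP, NSO_PORT)
+
+# ... use the sockets (see start_session() and subscribe() below) ...
+
+cdb.close(sub_sock)
+cdb.close(data_sock)
+```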
-### connect\_name
+### connect_name
```python
connect_name(sock, type, name, ip, port, path) -> None
```
-When we use connect() to create a connection to NCS/CDB, the name argument passed to the library initialization function confd\_init() (see [confd\_lib\_lib(3)](../../resources/man/confd_lib_lib.3.md)) is used to identify the connection in status reports and logs. I we want different names to be used for different connections from the same application process, we can use connect\_name() with the wanted name instead of connect().
+When we use connect() to create a connection to NCS/CDB, the name
+argument passed to the library initialization function confd_init() (see
+[confd_lib_lib(3)](../../resources/man/confd_lib_lib.3.md)) is used to identify the connection in status reports and
+logs. If we want different names to be used for different connections from
+the same application process, we can use connect_name() with the wanted
+name instead of connect().
Keyword arguments:
* sock -- a Python socket instance
-* type -- DATA\_SOCKET or SUBSCRIPTION\_SOCKET
+* type -- DATA_SOCKET or SUBSCRIPTION_SOCKET
* name -- the name
-* ip -- the ip address if socket is AF\_INET (optional)
-* port -- the port if socket is AF\_INET (optional)
-* path -- a filename if socket is AF\_UNIX (optional).
+* ip -- the ip address if socket is AF_INET (optional)
+* port -- the port if socket is AF_INET (optional)
+* path -- a filename if socket is AF_UNIX (optional).
### create
@@ -76,14 +93,19 @@ Keyword arguments:
create(sock, path) -> None
```
-Create a new list entry, presence container, or leaf of type empty (unless in a union, if type empty is in a union use set\_elem instead). Note that for list entries and containers, sub-elements will not exist until created or set via some of the other functions, thus doing implicit create via set\_object() or set\_values() may be preferred in this case.
+Create a new list entry, presence container, or leaf of type empty
+(if the leaf of type empty is part of a union, use set_elem() instead).
+Note that for list entries and containers, sub-elements will not exist
+until created or set via some of the other functions, thus doing implicit
+create via set_object() or set_values() may be preferred in this case.
Keyword arguments:
* sock -- a previously connected CDB socket
* path -- item to create (string)
-### cs\_node\_cd
+### cs_node_cd
```python
cs_node_cd(socket, path) -> Union[_ncs.CsNode, None]
@@ -91,7 +113,9 @@ cs_node_cd(socket, path) -> Union[_ncs.CsNode, None]
Utility function which finds the resulting CsNode given a string keypath.
-Does the same thing as \_ncs.cs\_node\_cd(), but can handle paths that are ambiguous due to traversing a mount point, by sending a request to the daemon
+Does the same thing as _ncs.cs_node_cd(), but can handle paths that are
+ambiguous due to traversing a mount point, by sending a request to the
+daemon.
Keyword arguments:
@@ -104,22 +128,26 @@ Keyword arguments:
delete(sock, path) -> None
```
-Delete a list entry, presence container, or leaf of type empty, and all its child elements (if any).
+Delete a list entry, presence container, or leaf of type empty, and all
+its child elements (if any).
Keyword arguments:
* sock -- a previously connected CDB socket
* path -- item to delete (string)
-### diff\_iterate
+### diff_iterate
```python
diff_iterate(sock, subid, iter, flags, initstate) -> int
```
-After reading the subscription socket the diff\_iterate() function can be used to iterate over the changes made in CDB data that matched the particular subscription point given by subid.
+After reading the subscription socket the diff_iterate() function can be
+used to iterate over the changes made in CDB data that matched the
+particular subscription point given by subid.
-The user defined function iter() will be called for each element that has been modified and matches the subscription.
+The user defined function iter() will be called for each element that has
+been modified and matches the subscription.
This function will return the last return value from the iter() callback.
@@ -130,11 +158,11 @@ Keyword arguments:
* iter -- iterator function (see below)
* initstate -- opaque passed to iter function
-The user defined function iter() will be called for each element that has been modified and matches the subscription. It must have the following signature:
+The user defined function iter() will be called for each element that has
+been modified and matches the subscription. It must have the following
+signature:
-```
-iter_fn(kp, op, oldv, newv, state) -> int
-```
+ iter_fn(kp, op, oldv, newv, state) -> int
Where arguments are:
@@ -144,13 +172,19 @@ Where arguments are:
* newv - the new value or None
* state - the initstate object
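+
+As an illustrative sketch, an iter() callback might look as follows,
+assuming the MOP_* operation constants and ITER_* return values are
+exposed by the top-level _ncs module (a full subscription loop is sketched
+under subscribe_done() further down):
+
+```python
+import _ncs
+from _ncs import cdb
+
+def iter_fn(kp, op, oldv, newv, state):
+    # kp is an _ncs.HKeypathRef; str(kp) gives a printable keypath.
+    if op == _ncs.MOP_VALUE_SET:
+        print('set    %s = %s' % (kp, newv))
+    elif op == _ncs.MOP_CREATED:
+        print('create %s' % kp)
+    elif op == _ncs.MOP_DELETED:
+        print('delete %s' % kp)
+    # Descend into modified containers and list entries as well.
+    return _ncs.ITER_RECURSE
+
+# After read_subscription_socket() has reported subscription point `subid`
+# on the subscription socket `sub_sock`:
+#     cdb.diff_iterate(sub_sock, subid, iter_fn, 0, None)
+```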
-### diff\_iterate\_resume
+### diff_iterate_resume
```python
diff_iterate_resume(sock, reply, iter, resumestate) -> int
```
-The application must call this function whenever an iterator function has returned ITER\_SUSPEND to finish up the iteration. If the application does not wish to continue iteration it must at least call diff\_iterate\_resume(sock, ITER\_STOP, None, None) to clean up the state. The reply parameter is what the iterator function would have returned (i.e. normally ITER\_RECURSE or ITER\_CONTINUE) if it hadn't returned ITER\_SUSPEND.
+The application must call this function whenever an iterator function has
+returned ITER_SUSPEND to finish up the iteration. If the application does
+not wish to continue iteration it must at least call
+diff_iterate_resume(sock, ITER_STOP, None, None) to clean up the state.
+The reply parameter is what the iterator function would have returned
+(i.e. normally ITER_RECURSE or ITER_CONTINUE) if it hadn't returned
+ITER_SUSPEND.
This function will return the last return value from the iter() callback.
@@ -158,16 +192,19 @@ Keyword arguments:
* sock -- a previously connected CDB socket
* reply -- the reply value
-* iter -- iterator function (see diff\_iterate())
+* iter -- iterator function (see diff_iterate())
* resumestate -- opaque passed to iter function
-### end\_session
+### end_session
```python
end_session(sock) -> None
```
-We use connect() to establish a read socket to CDB. When the socket is closed, the read session is ended. We can reuse the same socket for another read session, but we must then end the session and create another session using start\_session().
+We use connect() to establish a read socket to CDB. When the socket is
+closed, the read session is ended. We can reuse the same socket for another
+read session, but we must then end the session and create another session
+using start_session().
Keyword arguments:
@@ -179,7 +216,9 @@ Keyword arguments:
exists(sock, path) -> bool
```
-Leafs in the data model may be optional, and presence containers and list entries may or may not exist. This function checks whether a node exists in CDB.
+Leafs in the data model may be optional, and presence containers and list
+entries may or may not exist. This function checks whether a node exists
+in CDB.
Keyword arguments:
@@ -192,20 +231,23 @@ Keyword arguments:
get(sock, path) -> _ncs.Value
```
-This reads a a value from the path and returns the result. The path must lead to a leaf element in the XML data tree.
+This reads a value from the path and returns the result. The path must
+lead to a leaf element in the XML data tree.
Keyword arguments:
* sock -- a previously connected CDB socket
* path -- path to leaf
-### get\_case
+### get_case
```python
get_case(sock, choice, path) -> None
```
-When we use the YANG choice statement in the data model, this function can be used to find the currently selected case, avoiding useless get() etc requests for elements that belong to other cases.
+When we use the YANG choice statement in the data model, this function
+can be used to find the currently selected case, avoiding useless
+get() etc requests for elements that belong to other cases.
Keyword arguments:
@@ -213,7 +255,7 @@ Keyword arguments:
* choice -- the choice (string)
* path -- path to container or list entry where choice is defined (string)
-### get\_compaction\_info
+### get_compaction_info
```python
get_compaction_info(sock, dbfile) -> dict
@@ -223,29 +265,32 @@ Returns the compaction information on the given CDB file.
The return value is a dict of the form:
-```
-{
- 'fsize_previous': fsize_previous,
- 'fsize_current': fsize_current,
- 'last_time': last_time,
- 'ntrans': ntrans
-}
-```
+ {
+ 'fsize_previous': fsize_previous,
+ 'fsize_current': fsize_current,
+ 'last_time': last_time,
+ 'ntrans': ntrans
+ }
In this dict all values are integers.
Keyword arguments:
* sock -- a previously connected CDB socket
-* dbfile -- A\_CDB, O\_CDB or S\_CDB.
+* dbfile -- A_CDB, O_CDB or S_CDB.
-### get\_modifications
+### get_modifications
```python
get_modifications(sock, subid, flags, path) -> list
```
-The get\_modifications() function can be called after reception of a subscription notification to retrieve all the changes that caused the subscription notification. The socket sock is the subscription socket. The subscription id must also be provided. Optionally a path can be used to limit what is returned further (only changes below the supplied path will be returned), if this isn't needed path can be set to None.
+The get_modifications() function can be called after reception of a
+subscription notification to retrieve all the changes that caused the
+subscription notification. The socket sock is the subscription socket. The
+subscription id must also be provided. Optionally a path can be used to
+limit what is returned further (only changes below the supplied path will
+be returned); if this isn't needed, path can be set to None.
Keyword arguments:
@@ -254,13 +299,16 @@ Keyword arguments:
* flags -- the flags
* path -- a path in string format or None
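+
+As a sketch (not from the original reference), the changes behind a
+notification might be dumped like this, keeping the call simple with
+flags 0 and path None:
+
+```python
+from _ncs import cdb
+
+def dump_changes(sub_sock, subid):
+    # Call after read_subscription_socket() but before
+    # sync_subscription_socket().
+    for tagvalue in cdb.get_modifications(sub_sock, subid, 0, None):
+        print(tagvalue)   # each element is an _ncs.TagValue
+```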
-### get\_modifications\_cli
+### get_modifications_cli
```python
get_modifications_cli(sock, subid, flags) -> str
```
-The get\_modifications\_cli() function can be called after reception of a subscription notification to retrieve all the changes that caused the subscription notification as a string in Cisco CLI format. The socket sock is the subscription socket. The subscription id must also be provided.
+The get_modifications_cli() function can be called after reception of
+a subscription notification to retrieve all the changes that caused the
+subscription notification as a string in Cisco CLI format. The socket sock
+is the subscription socket. The subscription id must also be provided.
Keyword arguments:
@@ -268,26 +316,31 @@ Keyword arguments:
* subid -- subscription id
* flags -- the flags
-### get\_modifications\_iter
+### get_modifications_iter
```python
get_modifications_iter(sock, flags) -> list
```
-The get\_modifications\_iter() is basically a convenient short-hand of the get\_modifications() function intended to be used from within a iteration function started by diff\_iterate(). In this case no subscription id is needed, and the path is implicitly the current position in the iteration.
+The get_modifications_iter() is basically a convenient short-hand of
+the get_modifications() function intended to be used from within an
+iteration function started by diff_iterate(). In this case no subscription
+id is needed, and the path is implicitly the current position in the
+iteration.
Keyword arguments:
* sock -- a previously connected CDB socket
* flags -- the flags
-### get\_object
+### get_object
```python
get_object(sock, n, path) -> list
```
-This function reads at most n values from the container or list entry specified by the path, and returns them as a list of Value's.
+This function reads at most n values from the container or list entry
+specified by the path, and returns them as a list of Value's.
Keyword arguments:
@@ -295,13 +348,18 @@ Keyword arguments:
* n -- max number of values to read
* path -- path to a list entry or a container (string)
-### get\_objects
+### get_objects
```python
get_objects(sock, n, ix, nobj, path) -> list
```
-Similar to get\_object(), but reads multiple entries of a list based on the "instance integer" otherwise given within square brackets in the path - here the path must specify the list without the instance integer. At most n values from each of nobj entries, starting at entry ix, are read and placed in the values array. The return value is a list of objects where each object is represented as a list of Values.
+Similar to get_object(), but reads multiple entries of a list based
+on the "instance integer" otherwise given within square brackets in the
+path - here the path must specify the list without the instance integer.
+At most n values from each of nobj entries, starting at entry ix, are
+read and placed in the values array. The return value is a list of objects
+where each object is represented as a list of Values.
Keyword arguments:
@@ -311,102 +369,128 @@ Keyword arguments:
* nobj -- number of objects to read
* path -- path to a list entry or a container (string)
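+
+A small sketch (list path and chunk size are arbitrary) of reading a whole
+list in chunks by combining num_instances() and get_objects():
+
+```python
+from _ncs import cdb
+
+def read_all_entries(sock, list_path, values_per_entry, chunk=10):
+    # sock must have an active session (see start_session()).
+    n = cdb.num_instances(sock, list_path)
+    entries = []
+    for ix in range(0, n, chunk):
+        nobj = min(chunk, n - ix)
+        entries.extend(
+            cdb.get_objects(sock, values_per_entry, ix, nobj, list_path))
+    return entries   # a list of lists of _ncs.Value
+```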
-### get\_phase
+### get_phase
```python
get_phase(sock) -> dict
```
-Returns the start-phase that CDB is currently in. The return value is a dict of the form:
+Returns the start-phase that CDB is currently in. The return value is a
+dict of the form:
-```
-{
- 'phase': phase,
- 'flags': flags,
- 'init': init,
- 'upgrade': upgrade
-}
-```
+ {
+ 'phase': phase,
+ 'flags': flags,
+ 'init': init,
+ 'upgrade': upgrade
+ }
-In this dict 'phase' and 'flags' are integers, while 'init' and 'upgrade' are booleans.
+In this dict 'phase' and 'flags' are integers, while 'init' and 'upgrade'
+are booleans.
Keyword arguments:
* sock -- a previously connected CDB socket
-### get\_replay\_txids
+### get_replay_txids
```python
get_replay_txids(sock) -> List[Tuple]
```
-When the subscriptionReplay functionality is enabled in confd.conf this function returns the list of available transactions that CDB can replay. The current transaction id will be the first in the list, the second at txid\[1] and so on. In case there are no replay transactions available (the feature isn't enabled or there hasn't been any transactions yet) only one (the current) transaction id is returned.
+When the subscriptionReplay functionality is enabled in confd.conf this
+function returns the list of available transactions that CDB can replay.
+The current transaction id will be the first in the list, the second at
+txid[1] and so on. In case there are no replay transactions available (the
+feature isn't enabled or there haven't been any transactions yet) only one
+(the current) transaction id is returned.
-The returned list contains tuples with the form (s1, s2, s3, primary) where s1, s2 and s3 are unsigned integers and primary is either a string or None.
+The returned list contains tuples with the form (s1, s2, s3, primary) where
+s1, s2 and s3 are unsigned integers and primary is either a string or None.
Keyword arguments:
* sock -- a previously connected CDB socket
-### get\_transaction\_handle
+### get_transaction_handle
```python
get_transaction_handle(sock) -> int
```
-Returns the transaction handle for the transaction that triggered the current subscription notification. This function uses a subscription socket, and can only be called when a subscription notification for configuration data has been received on that socket, before sync\_subscription\_socket() has been called. Additionally, it is not possible to call this function from the iter() function passed to diff\_iterate().
+Returns the transaction handle for the transaction that triggered the
+current subscription notification. This function uses a subscription
+socket, and can only be called when a subscription notification for
+configuration data has been received on that socket, before
+sync_subscription_socket() has been called. Additionally, it is not
+possible to call this function from the iter() function passed to
+diff_iterate().
Note:
-
-> A CDB client is not expected to access the ConfD transaction store directly - this function should only be used for logging or debugging purposes.
+> A CDB client is not expected to access the ConfD transaction store
+> directly - this function should only be used for logging or debugging
+> purposes.
Note:
-
-> When the ConfD High Availability functionality is used, the transaction information is not available on secondary nodes.
+> When the ConfD High Availability functionality is used, the
+> transaction information is not available on secondary nodes.
Keyword arguments:
* sock -- a previously connected CDB socket
-### get\_txid
+### get_txid
```python
get_txid(sock) -> tuple
```
-Read the last transaction id from CDB. This function can be used if we are forced to reconnect to CDB. If the transaction id we read is identical to the last id we had prior to loosing the CDB sockets we don't have to reload our managed object data. See the User Guide for full explanation.
+Read the last transaction id from CDB. This function can be used if we are
+forced to reconnect to CDB. If the transaction id we read is identical to
+the last id we had prior to losing the CDB sockets we don't have to reload
+our managed object data. See the User Guide for full explanation.
-The returned tuple has the form (s1, s2, s3, primary) where s1, s2 and s3 are unsigned integers and primary is either a string or None.
+The returned tuple has the form (s1, s2, s3, primary) where s1, s2 and s3
+are unsigned integers and primary is either a string or None.
Keyword arguments:
* sock -- a previously connected CDB socket
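+
+A minimal sketch of the reconnect check described above; the caller is
+assumed to have saved the tuple returned by an earlier get_txid() call:
+
+```python
+from _ncs import cdb
+
+def config_unchanged_since(sock, saved_txid):
+    # If the transaction id is unchanged, there is no need to re-read
+    # the configuration after a reconnect.
+    return cdb.get_txid(sock) == saved_txid
+```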
-### get\_user\_session
+### get_user_session
```python
get_user_session(sock) -> int
```
-Returns the user session id for the transaction that triggered the current subscription notification. This function uses a subscription socket, and can only be called when a subscription notification for configuration data has been received on that socket, before sync\_subscription\_socket() has been called. Additionally, it is not possible to call this function from the iter() function passed to diff\_iterate(). To retrieve full information about the user session, use \_maapi.get\_user\_session() (see [confd\_lib\_maapi(3)](../../resources/man/confd_lib_maapi.3.md)).
+Returns the user session id for the transaction that triggered the
+current subscription notification. This function uses a subscription
+socket, and can only be called when a subscription notification for
+configuration data has been received on that socket, before
+sync_subscription_socket() has been called. Additionally, it is not
+possible to call this function from the iter() function passed to
+diff_iterate(). To retrieve full information about the user session,
+use _maapi.get_user_session() (see [confd_lib_maapi(3)](../../resources/man/confd_lib_maapi.3.md)).
Note:
-
-> When the ConfD High Availability functionality is used, the user session information is not available on secondary nodes.
+> When the ConfD High Availability functionality is used, the
+> user session information is not available on secondary nodes.
Keyword arguments:
* sock -- a previously connected CDB socket
-### get\_values
+### get_values
```python
get_values(sock, values, path) -> list
```
-Read an arbitrary set of sub-elements of a container or list entry. The values list must be pre-populated with a number of TagValue instances.
+Read an arbitrary set of sub-elements of a container or list entry. The
+values list must be pre-populated with a number of TagValue instances.
-TagValues passed in the values list will be updated with the corresponding values read and a new values list will be returned.
+TagValues passed in the values list will be updated with the corresponding
+values read and a new values list will be returned.
Keyword arguments:
@@ -420,19 +504,24 @@ Keyword arguments:
getcwd(sock) -> str
```
-Returns the current position as previously set by cd(), pushd(), or popd() as a string path. Note that what is returned is a pretty-printed version of the internal representation of the current position. It will be the shortest unique way to print the path but it might not exactly match the string given to cd().
+Returns the current position as previously set by cd(), pushd(), or popd()
+as a string path. Note that what is returned is a pretty-printed version of
+the internal representation of the current position. It will be the shortest
+unique way to print the path but it might not exactly match the string given
+to cd().
Keyword arguments:
* sock -- a previously connected CDB socket
-### getcwd\_kpath
+### getcwd_kpath
```python
getcwd_kpath(sock) -> _ncs.HKeypathRef
```
-Returns the current position like getcwd(), but as a HKeypathRef instead of as a string.
+Returns the current position like getcwd(), but as a HKeypathRef
+instead of as a string.
Keyword arguments:
@@ -451,71 +540,87 @@ Keyword arguments:
* sock -- a previously connected CDB socket
* path -- path to list entry
-### initiate\_journal\_compaction
+### initiate_journal_compaction
```python
initiate_journal_compaction(sock) -> None
```
-Normally CDB handles journal compaction of the config datastore automatically. If this has been turned off (in the configuration file) then the A.cdb file will grow indefinitely unless this API function is called periodically to initiate compaction. This function initiates a compaction and returns immediately (if the datastore is locked, the compaction will be delayed, but eventually compaction will take place). Calling this function when journal compaction is configured to be automatic has no effect.
+Normally CDB handles journal compaction of the config datastore
+automatically. If this has been turned off (in the configuration file)
+then the A.cdb file will grow indefinitely unless this API function is
+called periodically to initiate compaction. This function initiates a
+compaction and returns immediately (if the datastore is locked, the
+compaction will be delayed, but eventually compaction will take place).
+Calling this function when journal compaction is configured to be automatic
+has no effect.
Keyword arguments:
* sock -- a previously connected CDB socket
-### initiate\_journal\_dbfile\_compaction
+### initiate_journal_dbfile_compaction
```python
initiate_journal_dbfile_compaction(sock, dbfile) -> None
```
-Similar to initiate\_journal\_compaction() but initiates the compaction on the given CDB file instead of all CDB files.
+Similar to initiate_journal_compaction() but initiates the compaction
+on the given CDB file instead of all CDB files.
Keyword arguments:
* sock -- a previously connected CDB socket
-* dbfile -- A\_CDB, O\_CDB or S\_CDB.
+* dbfile -- A_CDB, O_CDB or S_CDB.
-### is\_default
+### is_default
```python
is_default(sock, path) -> bool
```
-This function returns True for a leaf which has a default value defined in the data model when no value has been set, i.e. when the default value is in effect. It returns False for other existing leafs. There is normally no need to call this function, since CDB automatically provides the default value as needed when get() etc is called.
+This function returns True for a leaf which has a default value defined in
+the data model when no value has been set, i.e. when the default value is
+in effect. It returns False for other existing leafs.
+There is normally no need to call this function, since CDB automatically
+provides the default value as needed when get() etc is called.
Keyword arguments:
* sock -- a previously connected CDB socket
* path -- path to leaf
-### mandatory\_subscriber
+### mandatory_subscriber
```python
mandatory_subscriber(sock, name) -> None
```
-Attaches a mandatory attribute and a mandatory name to the subscriber identified by sock. The name argument is distinct from the name argument in connect\_name().
+Attaches a mandatory attribute and a mandatory name to the subscriber
+identified by sock. The name argument is distinct from the name argument
+in connect_name().
Keyword arguments:
* sock -- a previously connected CDB socket
* name -- the name
-### next\_index
+### next_index
```python
next_index(sock, path) -> int
```
-Given a path to a list entry next\_index() returns the position (starting from 0) of the next entry (regardless of whether the path exists or not).
+Given a path to a list entry next_index() returns the position
+(starting from 0) of the next entry (regardless of whether the path
+exists or not).
Keyword arguments:
* sock -- a previously connected CDB socket
* path -- path to list entry
-### num\_instances
+### num_instances
```python
num_instances(sock, path) -> int
@@ -528,13 +633,16 @@ Keyword arguments:
* sock -- a previously connected CDB socket
* path -- path to list node
-### oper\_subscribe
+### oper_subscribe
```python
oper_subscribe(sock, nspace, path) -> int
```
-Sets up a CDB subscription for changes in the operational database. Similar to the subscriptions for configuration data, we can be notified of changes to the operational data stored in CDB. Note that there are several differences from the subscriptions for configuration data.
+Sets up a CDB subscription for changes in the operational database.
+Similar to the subscriptions for configuration data, we can be notified
+of changes to the operational data stored in CDB. Note that there are
+several differences from the subscriptions for configuration data.
Keyword arguments:
@@ -548,7 +656,8 @@ Keyword arguments:
popd(sock) -> None
```
-Pops the top element from the directory stack and changes directory to previous directory.
+Pops the top element from the directory stack and changes directory to
+previous directory.
Keyword arguments:
@@ -567,51 +676,59 @@ Keyword arguments:
* sock -- a previously connected CDB socket
* path -- path to cd to
-### read\_subscription\_socket
+### read_subscription_socket
```python
read_subscription_socket(sock) -> list
```
-This call will return a list of integer values containing subscription points earlier acquired through calls to subscribe().
+This call will return a list of integer values containing subscription
+points earlier acquired through calls to subscribe().
Keyword arguments:
* sock -- a previously connected CDB socket
-### read\_subscription\_socket2
+### read_subscription_socket2
```python
read_subscription_socket2(sock) -> tuple
```
-Another version of read\_subscription\_socket() which will return a 3-tuple in the form (type, flags, subpoints).
+Another version of read_subscription_socket() which will return a 3-tuple
+in the form (type, flags, subpoints).
Keyword arguments:
* sock -- a previously connected CDB socket
-### replay\_subscriptions
+### replay_subscriptions
```python
replay_subscriptions(sock, txid, sub_points) -> None
```
-This function makes it possible to replay the subscription events for the last configuration change to some or all CDB subscribers. This call is useful in a number of recovery scenarios, where some CDB subscribers lost connection to ConfD before having received all the changes in a transaction. The replay functionality is only available if it has been enabled in confd.conf.
+This function makes it possible to replay the subscription events for the
+last configuration change to some or all CDB subscribers. This call is
+useful in a number of recovery scenarios, where some CDB subscribers lost
+connection to ConfD before having received all the changes in a
+transaction. The replay functionality is only available if it has been
+enabled in confd.conf.
Keyword arguments:
* sock -- a previously connected CDB socket
* txid -- a 4-tuple of the form (s1, s2, s3, primary)
-* sub\_points -- a list of subscription points
+* sub_points -- a list of subscription points
-### set\_case
+### set_case
```python
set_case(sock, choice, scase, path) -> None
```
-When we use the YANG choice statement in the data model, this function can be used to select the current case.
+When we use the YANG choice statement in the data model, this function
+can be used to select the current case.
Keyword arguments:
@@ -620,13 +737,14 @@ Keyword arguments:
* scase -- the case (string)
* path -- path to container or list entry where choice is defined (string)
-### set\_elem
+### set_elem
```python
set_elem(sock, value, path) -> None
```
-Set the value of a single leaf. The value may be either a Value instance or a string.
+Set the value of a single leaf. The value may be either a Value instance or
+a string.
Keyword arguments:
@@ -634,26 +752,30 @@ Keyword arguments:
* value -- the value to set
* path -- a string pointing to a single leaf
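+
+For illustration only, a sketch of writing an operational ('config false')
+leaf at a hypothetical path; configuration data is not written through
+this API:
+
+```python
+import socket
+from _ncs import cdb
+
+sock = socket.socket()
+cdb.connect(sock, cdb.DATA_SOCKET, '127.0.0.1', 4569)
+
+cdb.start_session(sock, cdb.OPERATIONAL)
+# A plain string is accepted as the value, as is an _ncs.Value instance.
+cdb.set_elem(sock, '42', '/t:test/stats/counter')   # hypothetical leaf
+cdb.end_session(sock)
+cdb.close(sock)
+```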
-### set\_namespace
+### set_namespace
```python
set_namespace(sock, hashed_ns) -> None
```
-If we want to access data in CDB where the toplevel element name is not unique, we need to set the namespace. We are reading data related to a specific .fxs file. confdc can be used to generate a .py file with a class for the namespace, by the flag --emit-python to confdc (see confdc(1)).
+If we want to access data in CDB where the toplevel element name is not
+unique, we need to set the namespace. We are reading data related to a
+specific .fxs file. confdc can be used to generate a .py file with a class
+for the namespace, using the --emit-python flag to confdc (see confdc(1)).
Keyword arguments:
* sock -- a previously connected CDB socket
-* hashed\_ns -- the namespace hash
+* hashed_ns -- the namespace hash
-### set\_object
+### set_object
```python
set_object(sock, values, path) -> None
```
-Set all elements corresponding to the complete contents of a container or list entry, except for sub-lists.
+Set all elements corresponding to the complete contents of a container or
+list entry, except for sub-lists.
Keyword arguments:
@@ -661,20 +783,25 @@ Keyword arguments:
* values -- a list of Value:s
* path -- path to container or list entry (string)
-### set\_timeout
+### set_timeout
```python
set_timeout(sock, timeout_secs) -> None
```
-A timeout for client actions can be specified via /confdConfig/cdb/clientTimeout in confd.conf, see the confd.conf(5) manual page. This function can be used to dynamically extend (or shorten) the timeout for the current action. Thus it is possible to configure a restrictive timeout in confd.conf, but still allow specific actions to have a longer execution time.
+A timeout for client actions can be specified via
+/confdConfig/cdb/clientTimeout in confd.conf, see the confd.conf(5)
+manual page. This function can be used to dynamically extend (or shorten)
+the timeout for the current action. Thus it is possible to configure a
+restrictive timeout in confd.conf, but still allow specific actions to
+have a longer execution time.
Keyword arguments:
* sock -- a previously connected CDB socket
-* timeout\_secs -- timeout in seconds
+* timeout_secs -- timeout in seconds
-### set\_values
+### set_values
```python
set_values(sock, values, path) -> None
@@ -688,26 +815,32 @@ Keyword arguments:
* values -- a list of TagValue:s
* path -- path to container or list entry (string)
-### start\_session
+### start_session
```python
start_session(sock, db) -> None
```
-Starts a new session on an already established socket to CDB. The db parameter should be one of RUNNING, PRE\_COMMIT\_RUNNING, STARTUP and OPERATIONAL.
+Starts a new session on an already established socket to CDB. The db
+parameter should be one of RUNNING, PRE_COMMIT_RUNNING, STARTUP and
+OPERATIONAL.
Keyword arguments:
* sock -- a previously connected CDB socket
* db -- the database
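+
+A minimal read-session sketch (an assumption-laden example, not from the
+original reference); the device path and key are hypothetical:
+
+```python
+import socket
+from _ncs import cdb
+
+sock = socket.socket()
+cdb.connect(sock, cdb.DATA_SOCKET, '127.0.0.1', 4569)
+
+cdb.start_session(sock, cdb.RUNNING)
+if cdb.exists(sock, '/devices/device{ce0}'):          # hypothetical entry
+    addr = cdb.get(sock, '/devices/device{ce0}/address')
+    print('ce0 address: %s' % addr)
+cdb.end_session(sock)
+
+# The socket can be reused for another session or closed.
+cdb.close(sock)
+```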
-### start\_session2
+### start_session2
```python
start_session2(sock, db, flags) -> None
```
-This function may be used instead of start\_session() if it is considered necessary to have more detailed control over some aspects of the CDB session - if in doubt, use start\_session() instead. The sock and db arguments are the same as for start\_session(), and these values can be used for flags (ORed together if more than one).
+This function may be used instead of start_session() if it is considered
+necessary to have more detailed control over some aspects of the CDB
+session - if in doubt, use start_session() instead. The sock and db
+arguments are the same as for start_session(), and these values can be used
+for flags (ORed together if more than one).
Keyword arguments:
@@ -715,46 +848,54 @@ Keyword arguments:
* db -- the database
* flags -- the flags
-### sub\_abort\_trans
+### sub_abort_trans
```python
sub_abort_trans(sock, code, apptag_ns, apptag_tag, reason) -> None
```
-This function is to be called instead of sync\_subscription\_socket() when the subscriber wishes to abort the current transaction. It is only valid to call after read\_subscription\_socket2() has returned with type set to CDB\_SUB\_PREPARE. The arguments after sock are the same as to X\_seterr\_extended() and give the caller a way of indicating the reason for the failure.
+This function is to be called instead of sync_subscription_socket()
+when the subscriber wishes to abort the current transaction. It is only
+valid to call after read_subscription_socket2() has returned with
+type set to CDB_SUB_PREPARE. The arguments after sock are the same as to
+X_seterr_extended() and give the caller a way of indicating the
+reason for the failure.
Keyword arguments:
* sock -- a previously connected CDB socket
* code -- the error code
-* apptag\_ns -- the namespace hash
-* apptag\_tag -- the tag hash
+* apptag_ns -- the namespace hash
+* apptag_tag -- the tag hash
* reason -- reason string
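+
+A sketch of a prepare-phase subscriber that rejects a change; the
+SUB_PREPARE, DONE_PRIORITY and ERRCODE_APPLICATION constant names are
+assumptions here, and validate_pending_change() is a hypothetical
+application check:
+
+```python
+import _ncs
+from _ncs import cdb
+
+def validate_pending_change(sub_sock, subpoints):
+    # Application-specific validation of the pending changes (stub).
+    return True
+
+def handle_prepare(sub_sock):
+    # sub_sock carries a two-phase subscription set up with subscribe2().
+    sub_type, flags, subpoints = cdb.read_subscription_socket2(sub_sock)
+    if sub_type == cdb.SUB_PREPARE:
+        if validate_pending_change(sub_sock, subpoints):
+            cdb.sync_subscription_socket(sub_sock, cdb.DONE_PRIORITY)
+        else:
+            # Reject the transaction; the commit fails with this reason.
+            cdb.sub_abort_trans(sub_sock, _ncs.ERRCODE_APPLICATION, 0, 0,
+                                'refused by CDB subscriber')
+    # (commit/abort notification types would be acknowledged here as well)
+```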
-### sub\_abort\_trans\_info
+### sub_abort_trans_info
```python
sub_abort_trans_info(sock, code, apptag_ns, apptag_tag, error_info, reason) -> None
```
-Same a sub\_abort\_trans() but also fills in the NETCONF element.
+Same as sub_abort_trans() but also fills in the NETCONF `error-info` element.
Keyword arguments:
* sock -- a previously connected CDB socket
* code -- the error code
-* apptag\_ns -- the namespace hash
-* apptag\_tag -- the tag hash
-* error\_info -- a list of TagValue instances
+* apptag_ns -- the namespace hash
+* apptag_tag -- the tag hash
+* error_info -- a list of TagValue instances
* reason -- reason string
-### sub\_progress
+### sub_progress
```python
sub_progress(sock, msg) -> None
```
-After receiving a subscription notification (using read\_subscription\_socket()) but before acknowledging it (or aborting, in the case of prepare subscriptions), it is possible to send progress reports back to ConfD using the sub\_progress() function.
+After receiving a subscription notification (using
+read_subscription_socket()) but before acknowledging it (or aborting,
+in the case of prepare subscriptions), it is possible to send progress
+reports back to ConfD using the sub_progress() function.
Keyword arguments:
@@ -767,7 +908,11 @@ Keyword arguments:
subscribe(sock, prio, nspace, path) -> int
```
-Sets up a CDB subscription so that we are notified when CDB configuration data changes. There can be multiple subscription points from different sources, that is a single client daemon can have many subscriptions and there can be many client daemons. The return value is a subscription point used to identify this particular subscription.
+Sets up a CDB subscription so that we are notified when CDB configuration
+data changes. There can be multiple subscription points from different
+sources, that is a single client daemon can have many subscriptions and
+there can be many client daemons. The return value is a subscription point
+used to identify this particular subscription.
Keyword arguments:
@@ -782,7 +927,13 @@ Keyword arguments:
subscribe2(sock, type, flags, prio, nspace, path) -> int
```
-This function supersedes the current subscribe() and oper\_subscribe() as well as makes it possible to use the new two phase subscription method. Operational and configuration subscriptions can be done on the same socket, but in that case the notifications may be arbitrarily interleaved, including operational notifications arriving between different configuration notifications for the same transaction. If this is a problem, use separate sockets for operational and configuration subscriptions.
+This function supersedes the current subscribe() and oper_subscribe() as
+well as makes it possible to use the new two phase subscription method.
+Operational and configuration subscriptions can be done on the same
+socket, but in that case the notifications may be arbitrarily interleaved,
+including operational notifications arriving between different configuration
+notifications for the same transaction. If this is a problem, use separate
+sockets for operational and configuration subscriptions.
Keyword arguments:
@@ -793,70 +944,90 @@ Keyword arguments:
* nspace -- the namespace hash
* path -- path to node
-### subscribe\_done
+### subscribe_done
```python
subscribe_done(sock) -> None
```
-When a client is done registering all its subscriptions on a particular subscription socket it must call subscribe\_done(). No notifications will be delivered until then.
+When a client is done registering all its subscriptions on a particular
+subscription socket it must call subscribe_done(). No notifications will be
+delivered until then.
Keyword arguments:
* sock -- a previously connected CDB socket
-### sync\_subscription\_socket
+### sync_subscription_socket
```python
sync_subscription_socket(sock, st) -> None
```
-Once we have read the subscription notification through a call to read\_subscription\_socket() and optionally used the diff\_iterate() to iterate through the changes as well as acted on the changes to CDB, we must synchronize with CDB so that CDB can continue and deliver further subscription messages to subscribers with higher priority numbers.
+Once we have read the subscription notification through a call to
+read_subscription_socket() and optionally used the diff_iterate()
+to iterate through the changes as well as acted on the changes to CDB, we
+must synchronize with CDB so that CDB can continue and deliver further
+subscription messages to subscribers with higher priority numbers.
Keyword arguments:
* sock -- a previously connected CDB socket
* st -- sync type (int)
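+
+Putting the subscription functions together, a minimal subscription loop
+might look like the sketch below (sock is assumed to be connected as a
+subscription socket; the namespace hash and path are placeholders):
+
+    from _ncs import cdb
+
+    # Placeholder namespace hash; in real code, take it from the Python
+    # namespace module generated for your YANG model.
+    ns_hash = 0
+
+    point = cdb.subscribe(sock, 100, ns_hash, '/hypothetical/config')
+    cdb.subscribe_done(sock)
+
+    while True:
+        points = cdb.read_subscription_socket(sock)  # blocks until a change
+        if point in points:
+            pass  # react to the change, e.g. using diff_iterate()
+        cdb.sync_subscription_socket(sock, cdb.DONE_PRIORITY)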
-### trigger\_oper\_subscriptions
+### trigger_oper_subscriptions
```python
trigger_oper_subscriptions(sock, sub_points, flags) -> None
```
-This function works like trigger\_subscriptions(), but for CDB subscriptions to operational data. The caller will trigger all subscription points passed in the sub\_points list (or all operational data subscribers if the list is empty), and the call will not return until the last subscriber has called sync\_subscription\_socket().
+This function works like trigger_subscriptions(), but for CDB
+subscriptions to operational data. The caller will trigger all
+subscription points passed in the sub_points list (or all operational
+data subscribers if the list is empty), and the call will not return until
+the last subscriber has called sync_subscription_socket().
Keyword arguments:
* sock -- a previously connected CDB socket
-* sub\_points -- a list of subscription points
+* sub_points -- a list of subscription points
* flags -- the flags
-### trigger\_subscriptions
+### trigger_subscriptions
```python
trigger_subscriptions(sock, sub_points) -> None
```
-This function makes it possible to trigger CDB subscriptions for configuration data even though the configuration has not been modified. The caller will trigger all subscription points passed in the sub\_points list (or all subscribers if the list is empty) in priority order, and the call will not return until the last subscriber has called sync\_subscription\_socket().
+This function makes it possible to trigger CDB subscriptions for
+configuration data even though the configuration has not been modified.
+The caller will trigger all subscription points passed in the sub_points
+list (or all subscribers if the list is empty) in priority order, and the
+call will not return until the last subscriber has called
+sync_subscription_socket().
Keyword arguments:
* sock -- a previously connected CDB socket
-* sub\_points -- a list of subscription points
+* sub_points -- a list of subscription points
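+
+For example, to re-trigger all configuration subscribers (an empty list
+means every subscription point):
+
+    from _ncs import cdb
+
+    cdb.trigger_subscriptions(sock, [])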
-### wait\_start
+### wait_start
```python
wait_start(sock) -> None
```
-This call waits until CDB has completed start-phase 1 and is available, when it is CONFD\_OK is returned. If CDB already is available (i.e. start-phase >= 1) the call returns immediately. This can be used by a CDB client who is not synchronously started and only wants to wait until it can read its configuration. The call can be used after connect().
+This call waits until CDB has completed start-phase 1 and is available;
+when it is, CONFD_OK is returned. If CDB is already available (i.e.
+start-phase >= 1) the call returns immediately. This can be used by a CDB
+client who is not synchronously started and only wants to wait until it
+can read its configuration. The call can be used after connect().
Keyword arguments:
* sock -- a previously connected CDB socket
+
## Predefined Values
```python
diff --git a/developer-reference/pyapi/_ncs.dp.md b/developer-reference/pyapi/_ncs.dp.md
index 4428cb63..b461a257 100644
--- a/developer-reference/pyapi/_ncs.dp.md
+++ b/developer-reference/pyapi/_ncs.dp.md
@@ -1,108 +1,128 @@
-# \_ncs.dp Module
+# Python _ncs.dp Module
Low level callback module for connecting data providers to NCS.
-This module is used to connect to the NCS Data Provider API. The purpose of this API is to provide callback hooks so that user-written data providers can provide data stored externally to NCS. NCS needs this information in order to drive its northbound agents.
+This module is used to connect to the NCS Data Provider
+API. The purpose of this API is to provide callback hooks so that
+user-written data providers can provide data stored externally to NCS.
+NCS needs this information in order to drive its northbound agents.
-The module is also used to populate items in the data model which are not data or configuration items, such as statistics items from the device.
+The module is also used to populate items in the data model which are not
+data or configuration items, such as statistics items from the device.
-The module consists of a number of API functions whose purpose is to install different callback functions at different points in the data model tree which is the representation of the device configuration. Read more about callpoints in tailf\_yang\_extensions(5). Read more about how to use the module in the User Guide chapters on Operational data and External data.
+The module consists of a number of API functions whose purpose is to
+install different callback functions at different points in the data model
+tree which is the representation of the device configuration. Read more
+about callpoints in tailf_yang_extensions(5). Read more about how to use
+the module in the User Guide chapters on Operational data and External
+data.
-This documentation should be read together with the [confd\_lib\_dp(3)](../../resources/man/confd_lib_dp.3.md) man page.
+This documentation should be read together with the [confd_lib_dp(3)](../../resources/man/confd_lib_dp.3.md) man page.
## Functions
-### aaa\_reload
+### aaa_reload
```python
aaa_reload(tctx) -> None
```
-When the ConfD AAA tree is populated by an external data provider (see the AAA chapter in the User Guide), this function can be used by the data provider to notify ConfD when there is a change to the AAA data.
+When the ConfD AAA tree is populated by an external data provider (see the
+AAA chapter in the User Guide), this function can be used by the data
+provider to notify ConfD when there is a change to the AAA data.
Keyword arguments:
* tctx -- a transaction context
-### access\_reply\_result
+### access_reply_result
```python
access_reply_result(actx, result) -> None
```
-The callbacks must call this function to report the result of the access check to ConfD, and should normally return CONFD\_OK. If any other value is returned, it will cause the access check to be rejected.
+The callbacks must call this function to report the result of the access
+check to ConfD, and should normally return CONFD_OK. If any other value is
+returned, it will cause the access check to be rejected.
Keyword arguments:
* actx -- the authorization context
-* result -- the result (ACCESS\_RESULT\_xxx)
+* result -- the result (ACCESS_RESULT_xxx)
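+
+A minimal sketch of a command access check that accepts everything; the
+class is assumed to be registered with register_authorization_cb():
+
+    from _ncs import dp
+
+    class AuthorizationCallbacks(object):
+        def cb_chk_cmd_access(self, actx, cmdtokens, cmdop):
+            dp.access_reply_result(actx, dp.ACCESS_RESULT_ACCEPT)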
-### action\_delayed\_reply\_error
+### action_delayed_reply_error
```python
action_delayed_reply_error(uinfo, errstr) -> None
```
-If we use the CONFD\_DELAYED\_RESPONSE as a return value from the action callback, we must later asynchronously reply. This function is used to reply with error.
+If we use the CONFD_DELAYED_RESPONSE as a return value from the action
+callback, we must later asynchronously reply. This function is used to
+reply with error.
Keyword arguments:
* uinfo -- a user info context
* errstr -- an error string
-### action\_delayed\_reply\_ok
+### action_delayed_reply_ok
```python
action_delayed_reply_ok(uinfo) -> None
```
-If we use the CONFD\_DELAYED\_RESPONSE as a return value from the action callback, we must later asynchronously reply. This function is used to reply with success.
+If we use the CONFD_DELAYED_RESPONSE as a return value from the action
+callback, we must later asynchronously reply. This function is used to
+reply with success.
Keyword arguments:
* uinfo -- a user info context
-### action\_reply\_command
+### action_reply_command
```python
action_reply_command(uinfo, values) -> None
```
-If a CLI callback command should return data, it must invoke this function in response to the cb\_command() callback.
+If a CLI callback command should return data, it must invoke this function
+in response to the cb_command() callback.
Keyword arguments:
* uinfo -- a user info context
* values -- a list of strings or None
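+
+A minimal sketch of a CLI command callback returning two lines of output
+(registration of the action point itself is omitted):
+
+    from _ncs import dp
+
+    class ActionCallbacks(object):
+        def cb_command(self, uinfo, path, argv):
+            dp.action_reply_command(uinfo, ['first line', 'second line'])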
-### action\_reply\_completion
+### action_reply_completion
```python
action_reply_completion(uinfo, values) -> None
```
-This function must normally be called in response to the cb\_completion() callback.
+This function must normally be called in response to the cb_completion()
+callback.
Keyword arguments:
* uinfo -- a user info context
* values -- a list of 3-tuples or None (see below)
-The values argument must be None or a list of 3-tuples where each tuple is built up like:
+The values argument must be None or a list of 3-tuples where each tuple is
+built up like:
-```
-(type::int, value::string, extra::string)
-```
+ (type::int, value::string, extra::string)
The third item of the tuple (extra) may be set to None.
-### action\_reply\_range\_enum
+### action_reply_range_enum
```python
action_reply_range_enum(uinfo, values, keysize) -> None
```
-This function must be called in response to the cb\_completion() callback when it is invoked via a tailf:cli-custom-range-enumerator statement in the data model.
+This function must be called in response to the cb_completion() callback
+when it is invoked via a tailf:cli-custom-range-enumerator statement in the
+data model.
Keyword arguments:
@@ -110,15 +130,19 @@ Keyword arguments:
* values -- a list of keys as strings or None
* keysize -- number of keys for the list in the data model
-The values argument is a flat list of keys. If the list in the data model specifies multiple keys this list is still flat. The keysize argument tells us how many keys to use for each list element. So the size of values should be a multiple of keysize.
+The values argument is a flat list of keys. If the list in the data model
+specifies multiple keys this list is still flat. The keysize argument
+tells us how many keys to use for each list element. So the size of values
+should be a multiple of keysize.
-### action\_reply\_rewrite
+### action_reply_rewrite
```python
action_reply_rewrite(uinfo, values, unhides) -> None
```
-This function can be called instead of action\_reply\_command() as a response to a show path rewrite callback invocation.
+This function can be called instead of action_reply_command() as a
+response to a show path rewrite callback invocation.
Keyword arguments:
@@ -126,13 +150,14 @@ Keyword arguments:
* values -- a list of strings or None
* unhides -- a list of strings or None
-### action\_reply\_rewrite2
+### action_reply_rewrite2
```python
action_reply_rewrite2(uinfo, values, unhides, selects) -> None
```
-This function can be called instead of action\_reply\_command() as a response to a show path rewrite callback invocation.
+This function can be called instead of action_reply_command() as a
+response to a show path rewrite callback invocation.
Keyword arguments:
@@ -141,104 +166,115 @@ Keyword arguments:
* unhides -- a list of strings or None
* selects -- a list of strings or None
-### action\_reply\_values
+### action_reply_values
```python
action_reply_values(uinfo, values) -> None
```
-If the action definition specifies that the action should return data, it must invoke this function in response to the cb\_action() callback.
+If the action definition specifies that the action should return data, it
+must invoke this function in response to the cb_action() callback.
Keyword arguments:
* uinfo -- a user info context
-* values -- a list of \_lib.TagValue instances or None
+* values -- a list of _lib.TagValue instances or None
-### action\_set\_fd
+### action_set_fd
```python
action_set_fd(uinfo, sock) -> None
```
-Associate a worker socket with the action. This function must be called in the action cb\_init() callback.
+Associate a worker socket with the action. This function must be called in
+the action cb_init() callback.
Keyword arguments:
* uinfo -- a user info context
* sock -- a previously connected worker socket
-A typical implementation of an action cb\_init() callback looks like:
+A typical implementation of an action cb_init() callback looks like:
-```
-class ActionCallbacks(object):
- def __init__(self, workersock):
- self.workersock = workersock
+ class ActionCallbacks(object):
+ def __init__(self, workersock):
+ self.workersock = workersock
- def cb_init(self, uinfo):
- dp.action_set_fd(uinfo, self.workersock)
-```
+ def cb_init(self, uinfo):
+ dp.action_set_fd(uinfo, self.workersock)
-### action\_set\_timeout
+### action_set_timeout
```python
action_set_timeout(uinfo, timeout_secs) -> None
```
-Some action callbacks may require a significantly longer execution time than others, and this time may not even be possible to determine statically (e.g. a file download). In such cases the /confdConfig/capi/queryTimeout setting in confd.conf may be insufficient, and this function can be used to extend (or shorten) the timeout for the current callback invocation. The timeout is given in seconds from the point in time when the function is called.
+Some action callbacks may require a significantly longer execution time
+than others, and this time may not even be possible to determine statically
+(e.g. a file download). In such cases the /confdConfig/capi/queryTimeout
+setting in confd.conf may be insufficient, and this function can be used to
+extend (or shorten) the timeout for the current callback invocation. The
+timeout is given in seconds from the point in time when the function is
+called.
Keyword arguments:
* uinfo -- a user info context
-* timeout\_secs -- timeout value
+* timeout_secs -- timeout value
-### action\_seterr
+### action_seterr
```python
action_seterr(uinfo, errstr) -> None
```
-If action callback encounters fatal problems that can not be expressed via the reply function, it may call this function with an appropriate message and return CONFD\_ERR instead of CONFD\_OK.
+If action callback encounters fatal problems that can not be expressed via
+the reply function, it may call this function with an appropriate message
+and return CONFD_ERR instead of CONFD_OK.
Keyword arguments:
* uinfo -- a user info context
* errstr -- an error message string
-### action\_seterr\_extended
+### action_seterr_extended
```python
action_seterr_extended(uinfo, code, apptag_ns, apptag_tag, errstr) -> None
```
-This function can be used to provide more structured error information from an action callback.
+This function can be used to provide more structured error information
+from an action callback.
Keyword arguments:
* uinfo -- a user info context
* code -- an error code
-* apptag\_ns -- namespace - should be set to 0
-* apptag\_tag -- either 0 or the hash value for a data model node
+* apptag_ns -- namespace - should be set to 0
+* apptag_tag -- either 0 or the hash value for a data model node
* errstr -- an error message string
-### action\_seterr\_extended\_info
+### action_seterr_extended_info
```python
action_seterr_extended_info(uinfo, code, apptag_ns, apptag_tag,
error_info, errstr) -> None
```
-This function can be used to provide structured error information in the same way as action\_seterr\_extended(), and additionally provide contents for the NETCONF element.
+This function can be used to provide structured error information in the
+same way as action_seterr_extended(), and additionally provide contents for
+the NETCONF <error-info> element.
Keyword arguments:
* uinfo -- a user info context
* code -- an error code
-* apptag\_ns -- namespace - should be set to 0
-* apptag\_tag -- either 0 or the hash value for a data model node
-* error\_info -- a list of \_lib.TagValue instances
+* apptag_ns -- namespace - should be set to 0
+* apptag_tag -- either 0 or the hash value for a data model node
+* error_info -- a list of _lib.TagValue instances
* errstr -- an error message string
-### auth\_seterr
+### auth_seterr
```python
auth_seterr(actx, errstr) -> None
@@ -246,25 +282,33 @@ auth_seterr(actx, errstr) -> None
This function is used by the application to set an error string.
-This function can be used to provide a text message when the callback returns CONFD\_ERR. If used when rejecting a successful authentication, the message will be logged in ConfD's audit log (otherwise a generic "rejected by application callback" message is logged).
+This function can be used to provide a text message when the callback
+returns CONFD_ERR. If used when rejecting a successful authentication, the
+message will be logged in ConfD's audit log (otherwise a generic "rejected
+by application callback" message is logged).
Keyword arguments:
* actx -- the auth context
* errstr -- an error message string
-### authorization\_set\_timeout
+### authorization_set_timeout
```python
authorization_set_timeout(actx, timeout_secs) -> None
```
-The authorization callbacks are invoked on the daemon control socket, and as such are expected to complete quickly. However in case they send requests to a remote server, and such a request needs to be retried, this function can be used to extend the timeout for the current callback invocation. The timeout is given in seconds from the point in time when the function is called.
+The authorization callbacks are invoked on the daemon control socket, and
+as such are expected to complete quickly. However in case they send requests
+to a remote server, and such a request needs to be retried, this function
+can be used to extend the timeout for the current callback invocation. The
+timeout is given in seconds from the point in time when the function is
+called.
Keyword arguments:
* actx -- the authorization context
-* timeout\_secs -- timeout value
+* timeout_secs -- timeout value
### connect
@@ -272,18 +316,19 @@ Keyword arguments:
connect(dx, sock, type, ip, port, path) -> None
```
-Connects to the ConfD daemon. The socket instance provided via the 'sock' argument must be kept alive during the lifetime of the daemon context.
+Connects to the ConfD daemon. The socket instance provided via the 'sock'
+argument must be kept alive during the lifetime of the daemon context.
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
* sock -- a Python socket instance
-* type -- the socket type (CONTROL\_SOCKET or WORKER\_SOCKET)
-* ip -- the ip address if socket is AF\_INET (optional)
-* port -- the port if socket is AF\_INET (optional)
-* path -- a filename if socket is AF\_UNIX (optional).
+* type -- the socket type (CONTROL_SOCKET or WORKER_SOCKET)
+* ip -- the ip address if socket is AF_INET (optional)
+* port -- the port if socket is AF_INET (optional)
+* path -- a filename if socket is AF_UNIX (optional).
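+
+A minimal daemon skeleton sketch, assuming NSO listens on the default IPC
+address 127.0.0.1 and port 4569 (adjust to your installation):
+
+    import select
+    import socket
+
+    from _ncs import dp
+
+    dx = dp.init_daemon('example-daemon')
+    csock = socket.socket()
+    wsock = socket.socket()
+    dp.connect(dx, csock, dp.CONTROL_SOCKET, '127.0.0.1', 4569)
+    dp.connect(dx, wsock, dp.WORKER_SOCKET, '127.0.0.1', 4569)
+
+    # ... register callbacks here ...
+    dp.register_done(dx)
+
+    # Poll both sockets and let the library dispatch the callbacks.
+    while True:
+        readable, _, _ = select.select([csock, wsock], [], [])
+        for s in readable:
+            dp.fd_ready(dx, s)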
-### data\_get\_list\_filter
+### data_get_list_filter
```python
data_get_list_filter(tctx) -> ListFilter
@@ -295,154 +340,170 @@ Keyword arguments:
* tctx -- a transaction context
-### data\_reply\_attrs
+### data_reply_attrs
```python
data_reply_attrs(tctx, attrs) -> None
```
-This function is used by the cb\_get\_attrs() callback to return the requested attribute values.
+This function is used by the cb_get_attrs() callback to return the
+requested attribute values.
Keyword arguments:
* tctx -- a transaction context
-* attrs -- a list of \_lib.AttrValue instances
+* attrs -- a list of _lib.AttrValue instances
-### data\_reply\_found
+### data_reply_found
```python
data_reply_found(tctx) -> None
```
-This function is used by the cb\_exists\_optional() callback to indicate to ConfD that a node does exist.
+This function is used by the cb_exists_optional() callback to indicate to
+ConfD that a node does exist.
Keyword arguments:
* tctx -- a transaction context
-### data\_reply\_next\_key
+### data_reply_next_key
```python
data_reply_next_key(tctx, keys, next) -> None
```
-This function is used by the cb\_get\_next() and cb\_find\_next() callbacks to return the next key.
+This function is used by the cb_get_next() and cb_find_next() callbacks to
+return the next key.
Keyword arguments:
* tctx -- a transaction context
-* keys -- a list of keys of \_lib.Value for a list item (se below)
-* next -- int value passed to the next invocation of cb\_get\_next() callback
+* keys -- a list of keys of _lib.Value for a list item (see below)
+* next -- int value passed to the next invocation of cb_get_next() callback
-A list may have mutiple key leafs specified in the data model. This is why the keys argument must be a list.
+A list may have multiple key leafs specified in the data model. This is why
+the keys argument must be a list.
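+
+A hedged cb_get_next() sketch for a list with a single string key,
+iterating over an in-memory table (the table is a stand-in for real data):
+
+    import _ncs
+    from _ncs import dp
+
+    INTERFACES = ['eth0', 'eth1']
+
+    class DataCallbacks(object):
+        def cb_get_next(self, tctx, kp, next):
+            idx = 0 if next == -1 else next
+            if idx < len(INTERFACES):
+                key = _ncs.Value(INTERFACES[idx], _ncs.C_BUF)
+                dp.data_reply_next_key(tctx, [key], idx + 1)
+            else:
+                dp.data_reply_next_key(tctx, None, -1)  # end of list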
-### data\_reply\_next\_object\_array
+### data_reply_next_object_array
```python
data_reply_next_object_array(tctx, v, next) -> None
```
-This function is used by the optional cb\_get\_next\_object() and cb\_find\_next\_object() callbacks to return an entire object including its keys. It combines the functions of data\_reply\_next\_key() and data\_reply\_value\_array().
+This function is used by the optional cb_get_next_object() and
+cb_find_next_object() callbacks to return an entire object including its keys.
+It combines the functions of data_reply_next_key() and
+data_reply_value_array().
Keyword arguments:
* tctx -- a transaction context
-* v -- a list of \_lib.Value instances
-* next -- int value passed to the next invocation of cb\_get\_next() callback
+* v -- a list of _lib.Value instances
+* next -- int value passed to the next invocation of cb_get_next() callback
-### data\_reply\_next\_object\_arrays
+### data_reply_next_object_arrays
```python
data_reply_next_object_arrays(tctx, objs, timeout_millisecs) -> None
```
-This function is used by the optional cb\_get\_next\_object() and cb\_find\_next\_object() callbacks to return multiple objects including their keys, in \_lib.Value form.
+This function is used by the optional cb_get_next_object() and
+cb_find_next_object() callbacks to return multiple objects including their
+keys, in _lib.Value form.
Keyword arguments:
* tctx -- a transaction context
* objs -- a list of tuples or None (see below)
-* timeout\_millisecs -- timeout value for ConfD's caching of returned data
+* timeout_millisecs -- timeout value for ConfD's caching of returned data
-The format of argument objs is list(tuple(list(\_lib.Value), long)), or None to indicate end of list. Another way to indicate end of list is to include None as the first item in the 2-tuple last in the list.
+The format of argument objs is list(tuple(list(_lib.Value), long)), or
+None to indicate end of list. Another way to indicate end of list is to
+include None as the first item in the 2-tuple last in the list.
E.g.:
-```
-V = _lib.Value
-objs = [
- ( [ V(1), V(2) ], next1 ),
- ( [ V(3), V(4) ], next2 ),
- ( None, -1 )
- ]
-```
+ V = _lib.Value
+ objs = [
+ ( [ V(1), V(2) ], next1 ),
+ ( [ V(3), V(4) ], next2 ),
+ ( None, -1 )
+ ]
-### data\_reply\_next\_object\_tag\_value\_array
+### data_reply_next_object_tag_value_array
```python
data_reply_next_object_tag_value_array(tctx, tvs, next) -> None
```
-This function is used by the optional cb\_get\_next\_object() and cb\_find\_next\_object() callbacks to return an entire object including its keys
+This function is used by the optional cb_get_next_object() and
+cb_find_next_object() callbacks to return an entire object including its keys.
Keyword arguments:
* tctx -- a transaction context
-* tvs -- a list of \_lib.TagValue instances or None
-* next -- int value passed to the next invocation of cb\_get\_next\_object() callback
+* tvs -- a list of _lib.TagValue instances or None
+* next -- int value passed to the next invocation of cb_get_next_object()
+ callback
-### data\_reply\_next\_object\_tag\_value\_arrays
+### data_reply_next_object_tag_value_arrays
```python
data_reply_next_object_tag_value_arrays(tctx, objs, timeout_millisecs) -> None
```
-This function is used by the optional cb\_get\_next\_object() and cb\_find\_next\_object() callbacks to return multiple objects including their keys.
+This function is used by the optional cb_get_next_object() and
+cb_find_next_object() callbacks to return multiple objects including their
+keys.
Keyword arguments:
* tctx -- a transaction context
* objs -- a list of tuples or None (see below)
-* timeout\_millisecs -- timeout value for ConfD's caching of returned data
+* timeout_millisecs -- timeout value for ConfD's caching of returned data
-The format of argument objs is list(tuple(list(\_lib.TagValue), long)) or None to indicate end of list. Another way to indicate end of list is to include None as the first item in the 2-tuple last in the list.
+The format of argument objs is list(tuple(list(_lib.TagValue), long)) or
+None to indicate end of list. Another way to indicate end of list is to
+include None as the first item in the 2-tuple last in the list.
E.g.:
-```
-objs = [
- ( [ tagval1, tagval2 ], next1 ),
- ( [ tagval3, tagval4, tagval5 ], next2 ),
- ( None, -1 )
- ]
-```
+ objs = [
+ ( [ tagval1, tagval2 ], next1 ),
+ ( [ tagval3, tagval4, tagval5 ], next2 ),
+ ( None, -1 )
+ ]
-### data\_reply\_not\_found
+### data_reply_not_found
```python
data_reply_not_found(tctx) -> None
```
-This function is used by the cb\_get\_elem() and cb\_exists\_optional() callbacks to indicate to ConfD that a list entry or node does not exist.
+This function is used by the cb_get_elem() and cb_exists_optional()
+callbacks to indicate to ConfD that a list entry or node does not exist.
Keyword arguments:
* tctx -- a transaction context
-### data\_reply\_tag\_value\_array
+### data_reply_tag_value_array
```python
data_reply_tag_value_array(tctx, tvs) -> None
```
-This function is used to return an array of values, corresponding to a complete list entry, to ConfD. It can be used by the optional cb\_get\_object() callback.
+This function is used to return an array of values, corresponding to a
+complete list entry, to ConfD. It can be used by the optional
+cb_get_object() callback.
Keyword arguments:
* tctx -- a transaction context
-* tvs -- a list of \_lib.TagValue instances or None
+* tvs -- a list of _lib.TagValue instances or None
-### data\_reply\_value
+### data_reply_value
```python
data_reply_value(tctx, v) -> None
@@ -453,48 +514,60 @@ This function is used to return a single data item to ConfD.
Keyword arguments:
* tctx -- a transaction context
-* v -- a \_lib.Value instance
+* v -- a _lib.Value instance
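+
+A hedged cb_get_elem() sketch that returns a counter leaf or reports that
+it does not exist (lookup_counter() is a hypothetical application helper):
+
+    import _ncs
+    from _ncs import dp
+
+    def lookup_counter(path_str):
+        # Hypothetical application-specific lookup; replace with real data.
+        return 42
+
+    class StatsCallbacks(object):
+        def cb_get_elem(self, tctx, kp):
+            count = lookup_counter(str(kp))
+            if count is None:
+                dp.data_reply_not_found(tctx)
+            else:
+                dp.data_reply_value(tctx, _ncs.Value(count, _ncs.C_UINT32))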
-### data\_reply\_value\_array
+### data_reply_value_array
```python
data_reply_value_array(tctx, vs) -> None
```
-This function is used to return an array of values, corresponding to a complete list entry, to ConfD. It can be used by the optional cb\_get\_object() callback.
+This function is used to return an array of values, corresponding to a
+complete list entry, to ConfD. It can be used by the optional
+cb_get_object() callback.
Keyword arguments:
* tctx -- a transaction context
-* vs -- a list of \_lib.Value instances
+* vs -- a list of _lib.Value instances
-### data\_set\_timeout
+### data_set_timeout
```python
data_set_timeout(tctx, timeout_secs) -> None
```
-A data callback should normally complete quickly, since e.g. the execution of a 'show' command in the CLI may require many data callback invocations. In some rare cases it may still be necessary for a data callback to have a longer execution time, and then this function can be used to extend (or shorten) the timeout for the current callback invocation. The timeout is given in seconds from the point in time when the function is called.
+A data callback should normally complete quickly, since e.g. the
+execution of a 'show' command in the CLI may require many data callback
+invocations. In some rare cases it may still be necessary for a data
+callback to have a longer execution time, and then this function can be
+used to extend (or shorten) the timeout for the current callback invocation.
+The timeout is given in seconds from the point in time when the function is
+called.
Keyword arguments:
* tctx -- a transaction context
-* timeout\_secs -- timeout value
+* timeout_secs -- timeout value
-### db\_set\_timeout
+### db_set_timeout
```python
db_set_timeout(dbx, timeout_secs) -> None
```
-Some of the DB callbacks registered via register\_db\_cb(), e.g. cb\_copy\_running\_to\_startup(), may require a longer execution time than others. This function can be used to extend the timeout for the current callback invocation. The timeout is given in seconds from the point in time when the function is called.
+Some of the DB callbacks registered via register_db_cb(), e.g.
+cb_copy_running_to_startup(), may require a longer execution time than
+others. This function can be used to extend the timeout for the current
+callback invocation. The timeout is given in seconds from the point in
+time when the function is called.
Keyword arguments:
* dbx -- a db context of DbCtxRef
-* timeout\_secs -- timeout value
+* timeout_secs -- timeout value
-### db\_seterr
+### db_seterr
```python
db_seterr(dbx, errstr) -> None
@@ -507,104 +580,115 @@ Keyword arguments:
* dbx -- a db context
* errstr -- an error message string
-### db\_seterr\_extended
+### db_seterr_extended
```python
db_seterr_extended(dbx, code, apptag_ns, apptag_tag, errstr) -> None
```
-This function can be used to provide more structured error information from a db callback.
+This function can be used to provide more structured error information
+from a db callback.
Keyword arguments:
* dbx -- a db context
* code -- an error code
-* apptag\_ns -- namespace - should be set to 0
-* apptag\_tag -- either 0 or the hash value for a data model node
+* apptag_ns -- namespace - should be set to 0
+* apptag_tag -- either 0 or the hash value for a data model node
* errstr -- an error message string
-### db\_seterr\_extended\_info
+### db_seterr_extended_info
```python
db_seterr_extended_info(dbx, code, apptag_ns, apptag_tag,
error_info, errstr) -> None
```
-This function can be used to provide structured error information in the same way as db\_seterr\_extended(), and additionally provide contents for the NETCONF element.
+This function can be used to provide structured error information in the
+same way as db_seterr_extended(), and additionally provide contents for
+the NETCONF <error-info> element.
Keyword arguments:
* dbx -- a db context
* code -- an error code
-* apptag\_ns -- namespace - should be set to 0
-* apptag\_tag -- either 0 or the hash value for a data model node
-* error\_info -- a list of \_lib.TagValue instances
+* apptag_ns -- namespace - should be set to 0
+* apptag_tag -- either 0 or the hash value for a data model node
+* error_info -- a list of _lib.TagValue instances
* errstr -- an error message string
-### delayed\_reply\_error
+### delayed_reply_error
```python
delayed_reply_error(tctx, errstr) -> None
```
-This function must be used to return an error when tha actual callback returned CONFD\_DELAYED\_RESPONSE.
+This function must be used to return an error when the actual callback
+returned CONFD_DELAYED_RESPONSE.
Keyword arguments:
* tctx -- a transaction context
* errstr -- an error message string
-### delayed\_reply\_ok
+### delayed_reply_ok
```python
delayed_reply_ok(tctx) -> None
```
-This function must be used to return the equivalent of CONFD\_OK when the actual callback returned CONFD\_DELAYED\_RESPONSE.
+This function must be used to return the equivalent of CONFD_OK when the
+actual callback returned CONFD_DELAYED_RESPONSE.
Keyword arguments:
* tctx -- a transaction context
-### delayed\_reply\_validation\_warn
+### delayed_reply_validation_warn
```python
delayed_reply_validation_warn(tctx) -> None
```
-This function must be used to return the equivalent of CONFD\_VALIDATION\_WARN when the cb\_validate() callback returned CONFD\_DELAYED\_RESPONSE.
+This function must be used to return the equivalent of CONFD_VALIDATION_WARN
+when the cb_validate() callback returned CONFD_DELAYED_RESPONSE.
Keyword arguments:
* tctx -- a transaction context
-### error\_seterr
+### error_seterr
```python
error_seterr(uinfo, errstr) -> None
```
-This function must be called by format\_error() (above) to provide a replacement for the default error message. If format\_error() is called without calling error\_seterr() the default message will be used.
+This function must be called by format_error() (above) to provide a
+replacement for the default error message. If format_error() is called
+without calling error_seterr() the default message will be used.
Keyword arguments:
* uinfo -- a user info context
* errstr -- a string describing the error
-### fd\_ready
+### fd_ready
```python
fd_ready(dx, sock) -> None
```
-The database application owns all data provider sockets to ConfD and is responsible for the polling of these sockets. When one of the ConfD sockets has I/O ready to read, the application must invoke fd\_ready() on the socket.
+The database application owns all data provider sockets to ConfD and is
+responsible for the polling of these sockets. When one of the ConfD
+sockets has I/O ready to read, the application must invoke fd_ready() on
+the socket.
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
* sock -- the socket
-### init\_daemon
+### init_daemon
```python
init_daemon(name) -> DaemonCtxRef
@@ -616,276 +700,323 @@ Keyword arguments:
* name -- a string used to uniquely identify the daemon
-### install\_crypto\_keys
+### install_crypto_keys
```python
install_crypto_keys(dtx) -> None
```
-It is possible to define AES keys inside confd.conf. These keys are used by ConfD to encrypt data which is entered into the system. The supported types are tailf:aes-cfb-128-encrypted-string and tailf:aes-256-cfb-128-encrypted-string. This function will copy those keys from ConfD (which reads confd.conf) into memory in the library.
+It is possible to define AES keys inside confd.conf. These keys
+are used by ConfD to encrypt data which is entered into the system.
+The supported types are tailf:aes-cfb-128-encrypted-string and
+tailf:aes-256-cfb-128-encrypted-string.
+This function will copy those keys from ConfD (which reads confd.conf) into
+memory in the library.
-This function must be called before register\_done() is called.
+This function must be called before register_done() is called.
Keyword arguments:
* dtx -- a daemon context which is connected through a call to connect()
-### nano\_service\_reply\_proplist
+### nano_service_reply_proplist
```python
nano_service_reply_proplist(tctx, proplist) -> None
```
-This function must be called with the new property list, immediately prior to returning from the callback, if the stored property list should be updated. If a callback returns without calling nano\_service\_reply\_proplist(), the previous property list is retained. To completely delete the property list, call this function with the proplist argument set to an empty list or None.
+This function must be called with the new property list, immediately prior
+to returning from the callback, if the stored property list should be
+updated. If a callback returns without calling nano_service_reply_proplist(),
+the previous property list is retained. To completely delete the property
+list, call this function with the proplist argument set to an empty list or
+None.
-The proplist argument should be a list of 2-tuples built up like this: list( (name, value), (name, value), ... ) In a 2-tuple both 'name' and 'value' must be strings.
+The proplist argument should be a list of 2-tuples built up like this:
+ list( (name, value), (name, value), ... )
+In a 2-tuple both 'name' and 'value' must be strings.
Keyword arguments:
* tctx -- a transaction context
* proplist -- a list of properties or None
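+
+For example, to store two properties from inside a nano service callback
+(tctx is the transaction context passed to that callback; the property
+names are illustrative only):
+
+    from _ncs import dp
+
+    dp.nano_service_reply_proplist(tctx, [('vm-name', 'vm-0'),
+                                          ('mgmt-ip', '10.0.0.1')])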
-### notification\_flush
+### notification_flush
```python
notification_flush(nctx) -> None
```
-Notifications are sent asynchronously, i.e. normally without blocking the caller of the send functions described above. This means that in some cases ConfD's sending of the notifications on the northbound interfaces may lag behind the send calls. This function can be used to make sure that the notifications have actually been sent out.
+Notifications are sent asynchronously, i.e. normally without blocking the
+caller of the send functions described above. This means that in some cases
+ConfD's sending of the notifications on the northbound interfaces may lag
+behind the send calls. This function can be used to make sure that the
+notifications have actually been sent out.
Keyword arguments:
-* nctx -- notification context returned from register\_notification\_stream()
+* nctx -- notification context returned from register_notification_stream()
-### notification\_replay\_complete
+### notification_replay_complete
```python
notification_replay_complete(nctx) -> None
```
-The application calls this function to notify ConfD that the replay is complete
+The application calls this function to notify ConfD that the replay is
+complete.
Keyword arguments:
-* nctx -- notification context returned from register\_notification\_stream()
+* nctx -- notification context returned from register_notification_stream()
-### notification\_replay\_failed
+### notification_replay_failed
```python
notification_replay_failed(nctx) -> None
```
-In case the application fails to complete the replay as requested (e.g. the log gets overwritten while the replay is in progress), the application should call this function instead of notification\_replay\_complete(). An error message describing the reason for the failure can be supplied by first calling notification\_seterr() or notification\_seterr\_extended().
+In case the application fails to complete the replay as requested (e.g. the
+log gets overwritten while the replay is in progress), the application
+should call this function instead of notification_replay_complete(). An
+error message describing the reason for the failure can be supplied by
+first calling notification_seterr() or notification_seterr_extended().
Keyword arguments:
-* nctx -- notification context returned from register\_notification\_stream()
+* nctx -- notification context returned from register_notification_stream()
-### notification\_reply\_log\_times
+### notification_reply_log_times
```python
notification_reply_log_times(nctx, creation, aged) -> None
```
-Reply function for use in the cb\_get\_log\_times() callback invocation. If no notifications have been aged out of the log, give None for the aged argument.
+Reply function for use in the cb_get_log_times() callback invocation. If no
+notifications have been aged out of the log, give None for the aged argument.
Keyword arguments:
-* nctx -- notification context returned from register\_notification\_stream()
-* creation -- a \_lib.DateTime instance
-* aged -- a \_lib.DateTime instance or None
+* nctx -- notification context returned from register_notification_stream()
+* creation -- a _lib.DateTime instance
+* aged -- a _lib.DateTime instance or None
-### notification\_send
+### notification_send
```python
notification_send(nctx, time, values) -> None
```
-This function is called by the application to send a notification defined at the top level of a YANG module, whether "live" or replay.
+This function is called by the application to send a notification defined
+at the top level of a YANG module, whether "live" or replay.
Keyword arguments:
-* nctx -- notification context returned from register\_notification\_stream()
-* time -- a \_lib.DateTime instance
-* values -- a list of \_lib.TagValue instances or None
+* nctx -- notification context returned from register_notification_stream()
+* time -- a _lib.DateTime instance
+* values -- a list of _lib.TagValue instances or None
-### notification\_send\_path
+### notification_send_path
```python
notification_send_path(nctx, time, values, path) -> None
```
-This function is called by the application to send a notification defined as a child of a container or list in a YANG 1.1 module, whether "live" or replay.
+This function is called by the application to send a notification defined
+as a child of a container or list in a YANG 1.1 module, whether "live" or
+replay.
Keyword arguments:
-* nctx -- notification context returned from register\_notification\_stream()
-* time -- a \_lib.DateTime instance
-* values -- a list of \_lib.TagValue instances or None
+* nctx -- notification context returned from register_notification_stream()
+* time -- a _lib.DateTime instance
+* values -- a list of _lib.TagValue instances or None
* path -- path to the parent of the notification in the data tree
-### notification\_send\_snmp
+### notification_send_snmp
```python
notification_send_snmp(nctx, notification, varbinds) -> None
```
-Sends the SNMP notification specified by 'notification', without requesting inform-request delivery information. This is equivalent to calling notification\_send\_snmp\_inform() with None as the cb\_id argument. I.e. if the common arguments are the same, the two functions will send the exact same set of traps and inform-requests.
+Sends the SNMP notification specified by 'notification', without requesting
+inform-request delivery information. This is equivalent to calling
+notification_send_snmp_inform() with None as the cb_id argument. I.e. if
+the common arguments are the same, the two functions will send the exact
+same set of traps and inform-requests.
Keyword arguments:
-* nctx -- notification context returned from register\_snmp\_notification()
+* nctx -- notification context returned from register_snmp_notification()
* notification -- the notification string
-* varbinds -- a list of \_lib.SnmpVarbind instances or None
+* varbinds -- a list of _lib.SnmpVarbind instances or None
-### notification\_send\_snmp\_inform
+### notification_send_snmp_inform
```python
notification_send_snmp_inform(nctx, notification, varbinds, cb_id, ref) -> None
```
-Sends the SNMP notification specified by notification. If cb\_id is not None the callbacks registered for cb\_id will be invoked with the ref argument.
+Sends the SNMP notification specified by notification. If cb_id is not None
+the callbacks registered for cb_id will be invoked with the ref argument.
Keyword arguments:
-* nctx -- notification context returned from register\_snmp\_notification()
+* nctx -- notification context returned from register_snmp_notification()
* notification -- the notification string
-* varbinds -- a list of \_lib.SnmpVarbind instances or None
-* cb\_id -- callback id
+* varbinds -- a list of _lib.SnmpVarbind instances or None
+* cb_id -- callback id
* ref -- argument sent to callbacks
-### notification\_set\_fd
+### notification_set_fd
```python
notification_set_fd(nctx, sock) -> None
```
-This function may optionally be called by the cb\_replay() callback to request that the worker socket given by 'sock' should be used for the replay. Otherwise the socket specified in register\_notification\_stream() will be used.
+This function may optionally be called by the cb_replay() callback to
+request that the worker socket given by 'sock' should be used for the
+replay. Otherwise the socket specified in register_notification_stream()
+will be used.
Keyword arguments:
-* nctx -- notification context returned from register\_notification\_stream()
+* nctx -- notification context returned from register_notification_stream()
* sock -- a previously connected worker socket
-### notification\_set\_snmp\_notify\_name
+### notification_set_snmp_notify_name
```python
notification_set_snmp_notify_name(nctx, notify_name) -> None
```
-This function can be used to change the snmpNotifyName (notify\_name) for the nctx context.
+This function can be used to change the snmpNotifyName (notify_name) for
+the nctx context.
Keyword arguments:
-* nctx -- notification context returned from register\_snmp\_notification()
-* notify\_name -- the snmpNotifyName
+* nctx -- notification context returned from register_snmp_notification()
+* notify_name -- the snmpNotifyName
-### notification\_set\_snmp\_src\_addr
+### notification_set_snmp_src_addr
```python
notification_set_snmp_src_addr(nctx, family, src_addr) -> None
```
-By default, the source address for the SNMP notifications that are sent by the above functions is chosen by the IP stack of the OS. This function may be used to select a specific source address, given by src\_addr, for the SNMP notifications subsequently sent using the nctx context. The default can be restored by calling the function with family set to AF\_UNSPEC.
+By default, the source address for the SNMP notifications that are sent by
+the above functions is chosen by the IP stack of the OS. This function may
+be used to select a specific source address, given by src_addr, for the
+SNMP notifications subsequently sent using the nctx context. The default
+can be restored by calling the function with family set to AF_UNSPEC.
Keyword arguments:
-* nctx -- notification context returned from register\_snmp\_notification()
-* family -- AF\_INET, AF\_INET6 or AF\_UNSPEC
-* src\_addr -- the source address in string format
+* nctx -- notification context returned from register_snmp_notification()
+* family -- AF_INET, AF_INET6 or AF_UNSPEC
+* src_addr -- the source address in string format
-### notification\_seterr
+### notification_seterr
```python
notification_seterr(nctx, errstr) -> None
```
-In some cases the callbacks may be unable to carry out the requested actions, e.g. the capacity for simultaneous replays might be exceeded, and they can then return CONFD\_ERR. This function allows the callback to associate an error message with the failure. It can also be used to supply an error message before calling notification\_replay\_failed().
+In some cases the callbacks may be unable to carry out the requested
+actions, e.g. the capacity for simultaneous replays might be exceeded, and
+they can then return CONFD_ERR. This function allows the callback to
+associate an error message with the failure. It can also be used to supply
+an error message before calling notification_replay_failed().
Keyword arguments:
-* nctx -- notification context returned from register\_notification\_stream()
+* nctx -- notification context returned from register_notification_stream()
* errstr -- an error message string
-### notification\_seterr\_extended
+### notification_seterr_extended
```python
notification_seterr_extended(nctx, code, apptag_ns, apptag_tag, errstr) -> None
```
-This function can be used to provide more structured error information from a notification callback.
+This function can be used to provide more structured error information
+from a notification callback.
Keyword arguments:
-* nctx -- notification context returned from register\_notification\_stream()
+* nctx -- notification context returned from register_notification_stream()
* code -- an error code
-* apptag\_ns -- namespace - should be set to 0
-* apptag\_tag -- either 0 or the hash value for a data model node
+* apptag_ns -- namespace - should be set to 0
+* apptag_tag -- either 0 or the hash value for a data model node
* errstr -- an error message string
-### notification\_seterr\_extended\_info
+### notification_seterr_extended_info
```python
notification_seterr_extended_info(nctx, code, apptag_ns, apptag_tag,
error_info, errstr) -> None
```
-This function can be used to provide structured error information in the same way as notification\_seterr\_extended(), and additionally provide contents for the NETCONF element.
+This function can be used to provide structured error information in the
+same way as notification_seterr_extended(), and additionally provide
+contents for the NETCONF <error-info> element.
Keyword arguments:
-* nctx -- notification context returned from register\_notification\_stream()
+* nctx -- notification context returned from register_notification_stream()
* code -- an error code
-* apptag\_ns -- namespace - should be set to 0
-* apptag\_tag -- either 0 or the hash value for a data model node
-* error\_info -- a list of \_lib.TagValue instances
+* apptag_ns -- namespace - should be set to 0
+* apptag_tag -- either 0 or the hash value for a data model node
+* error_info -- a list of _lib.TagValue instances
* errstr -- an error message string
-### register\_action\_cbs
+### register_action_cbs
```python
register_action_cbs(dx, actionpoint, acb) -> None
```
-This function registers up to five callback functions, two of which will be called in sequence when an action is invoked.
+This function registers up to five callback functions, two of which will
+be called in sequence when an action is invoked.
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
* actionpoint -- the name of the action point
* acb -- the callback instance (see below)
-The acb argument should be an instance of a class with callback methods. E.g.:
+The acb argument should be an instance of a class with callback methods.
+E.g.:
-```
-class ActionCallbacks(object):
- def cb_init(self, uinfo):
- pass
+ class ActionCallbacks(object):
+ def cb_init(self, uinfo):
+ pass
- def cb_abort(self, uinfo):
- pass
+ def cb_abort(self, uinfo):
+ pass
- def cb_action(self, uinfo, name, kp, params):
- pass
+ def cb_action(self, uinfo, name, kp, params):
+ pass
- def cb_command(self, uinfo, path, argv):
- pass
+ def cb_command(self, uinfo, path, argv):
+ pass
- def cb_completion(self, uinfo, cli_style, token, completion_char,
- kp, cmdpath, cmdparam_id, simpleType, extra):
- pass
+ def cb_completion(self, uinfo, cli_style, token, completion_char,
+ kp, cmdpath, cmdparam_id, simpleType, extra):
+ pass
-acb = ActionCallbacks()
-dp.register_action_cbs(dx, 'actionpoint-1', acb)
-```
+ acb = ActionCallbacks()
+ dp.register_action_cbs(dx, 'actionpoint-1', acb)
Notes about some of the callbacks:
-cb\_action() The params argument is a list of \_lib.TagValue instances.
+cb_action()
+ The params argument is a list of _lib.TagValue instances.
-cb\_command() The argv argument is a list of strings.
+cb_command()
+ The argv argument is a list of strings.
-### register\_auth\_cb
+### register_auth_cb
```python
register_auth_cb(dx, acb) -> None
@@ -895,21 +1026,19 @@ Registers the authentication callback.
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
* acb -- the callback instance (see below)
E.g.:
-```
-class AuthCallbacks(object):
- def cb_auth(self, actx):
- pass
+ class AuthCallbacks(object):
+ def cb_auth(self, actx):
+ pass
-acb = AuthCallbacks()
-dp.register_auth_cb(dx, acb)
-```
+ acb = AuthCallbacks()
+ dp.register_auth_cb(dx, acb)
-### register\_authorization\_cb
+### register_authorization_cb
```python
register_authorization_cb(dx, acb, cmd_filter, data_filter) -> None
@@ -917,26 +1046,24 @@ register_authorization_cb(dx, acb, cmd_filter, data_filter) -> None
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
* acb -- the callback instance (see below)
-* cmd\_filter -- set to 0 for no filtering
-* data\_filter -- set to 0 for no filtering
+* cmd_filter -- set to 0 for no filtering
+* data_filter -- set to 0 for no filtering
E.g.:
-```
-class AuthorizationCallbacks(object):
- def cb_chk_cmd_access(self, actx, cmdtokens, cmdop):
- pass
+ class AuthorizationCallbacks(object):
+ def cb_chk_cmd_access(self, actx, cmdtokens, cmdop):
+ pass
- def cb_chk_data_access(self, actx, hashed_ns, hkp, dataop, how):
- pass
+ def cb_chk_data_access(self, actx, hashed_ns, hkp, dataop, how):
+ pass
-acb = AuthCallbacks()
-dp.register_authorization_cb(dx, acb)
-```
+    acb = AuthorizationCallbacks()
+ dp.register_authorization_cb(dx, acb)
-### register\_data\_cb
+### register_data_cb
```python
register_data_cb(dx, callpoint, data, flags) -> None
@@ -946,179 +1073,180 @@ Registers data manipulation callback functions.
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
* callpoint -- name of a tailf:callpoint in the data model
* data -- the callback instance (see below)
-* flags -- data callbacks flags, dp.DATA\_\* (optional)
+* flags -- data callbacks flags, dp.DATA_* (optional)
-The data argument should be an instance of a class with callback methods. E.g.:
+The data argument should be an instance of a class with callback methods.
+E.g.:
-```
-class DataCallbacks(object):
- def cb_exists_optional(self, tctx, kp):
- pass
+ class DataCallbacks(object):
+ def cb_exists_optional(self, tctx, kp):
+ pass
- def cb_get_elem(self, tctx, kp):
- pass
+ def cb_get_elem(self, tctx, kp):
+ pass
- def cb_get_next(self, tctx, kp, next):
- pass
+ def cb_get_next(self, tctx, kp, next):
+ pass
- def cb_set_elem(self, tctx, kp, newval):
- pass
+ def cb_set_elem(self, tctx, kp, newval):
+ pass
- def cb_create(self, tctx, kp):
- pass
+ def cb_create(self, tctx, kp):
+ pass
- def cb_remove(self, tctx, kp):
- pass
+ def cb_remove(self, tctx, kp):
+ pass
- def cb_find_next(self, tctx, kp, type, keys):
- pass
+ def cb_find_next(self, tctx, kp, type, keys):
+ pass
- def cb_num_instances(self, tctx, kp):
- pass
+ def cb_num_instances(self, tctx, kp):
+ pass
- def cb_get_object(self, tctx, kp):
- pass
+ def cb_get_object(self, tctx, kp):
+ pass
- def cb_get_next_object(self, tctx, kp, next):
- pass
+ def cb_get_next_object(self, tctx, kp, next):
+ pass
- def cb_find_next_object(self, tctx, kp, type, keys):
- pass
+ def cb_find_next_object(self, tctx, kp, type, keys):
+ pass
- def cb_get_case(self, tctx, kp, choice):
- pass
+ def cb_get_case(self, tctx, kp, choice):
+ pass
- def cb_set_case(self, tctx, kp, choice, caseval):
- pass
+ def cb_set_case(self, tctx, kp, choice, caseval):
+ pass
- def cb_get_attrs(self, tctx, kp, attrs):
- pass
+ def cb_get_attrs(self, tctx, kp, attrs):
+ pass
- def cb_set_attr(self, tctx, kp, attr, v):
- pass
+ def cb_set_attr(self, tctx, kp, attr, v):
+ pass
- def cb_move_after(self, tctx, kp, prevkeys):
- pass
+ def cb_move_after(self, tctx, kp, prevkeys):
+ pass
- def cb_write_all(self, tctx, kp):
- pass
+ def cb_write_all(self, tctx, kp):
+ pass
-dcb = DataCallbacks()
-dp.register_data_cb(dx, 'example-callpoint-1', dcb)
-```
+ dcb = DataCallbacks()
+ dp.register_data_cb(dx, 'example-callpoint-1', dcb)
-### register\_db\_cb
+### register_db_cb
```python
register_db_cb(dx, dbcbs) -> None
```
-This function is used to set callback functions which span over several ConfD transactions.
+This function is used to set callback functions which span over several
+ConfD transactions.
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
* dbcbs -- the callback instance (see below)
-The dbcbs argument should be an instance of a class with callback methods. E.g.:
+The dbcbs argument should be an instance of a class with callback methods.
+E.g.:
-```
-class DbCallbacks(object):
- def cb_candidate_commit(self, dbx, timeout):
- pass
+ class DbCallbacks(object):
+ def cb_candidate_commit(self, dbx, timeout):
+ pass
- def cb_candidate_confirming_commit(self, dbx):
- pass
+ def cb_candidate_confirming_commit(self, dbx):
+ pass
- def cb_candidate_reset(self, dbx):
- pass
+ def cb_candidate_reset(self, dbx):
+ pass
- def cb_candidate_chk_not_modified(self, dbx):
- pass
+ def cb_candidate_chk_not_modified(self, dbx):
+ pass
- def cb_candidate_rollback_running(self, dbx):
- pass
+ def cb_candidate_rollback_running(self, dbx):
+ pass
- def cb_candidate_validate(self, dbx):
- pass
+ def cb_candidate_validate(self, dbx):
+ pass
- def cb_add_checkpoint_running(self, dbx):
- pass
+ def cb_add_checkpoint_running(self, dbx):
+ pass
- def cb_del_checkpoint_running(self, dbx):
- pass
+ def cb_del_checkpoint_running(self, dbx):
+ pass
- def cb_activate_checkpoint_running(self, dbx):
- pass
+ def cb_activate_checkpoint_running(self, dbx):
+ pass
- def cb_copy_running_to_startup(self, dbx):
- pass
+ def cb_copy_running_to_startup(self, dbx):
+ pass
- def cb_running_chk_not_modified(self, dbx):
- pass
+ def cb_running_chk_not_modified(self, dbx):
+ pass
- def cb_lock(self, dbx, dbname):
- pass
+ def cb_lock(self, dbx, dbname):
+ pass
- def cb_unlock(self, dbx, dbname):
- pass
+ def cb_unlock(self, dbx, dbname):
+ pass
- def cb_lock_partial(self, dbx, dbname, lockid, paths):
- pass
+ def cb_lock_partial(self, dbx, dbname, lockid, paths):
+ pass
- def cb_ulock_partial(self, dbx, dbname, lockid):
- pass
+ def cb_ulock_partial(self, dbx, dbname, lockid):
+ pass
- def cb_delete_confid(self, dbx, dbname):
- pass
+ def cb_delete_confid(self, dbx, dbname):
+ pass
-dbcbs = DbCallbacks()
-dp.register_db_cb(dx, dbcbs)
-```
+ dbcbs = DbCallbacks()
+ dp.register_db_cb(dx, dbcbs)
-### register\_done
+### register_done
```python
register_done(dx) -> None
```
-When we have registered all the callbacks for a daemon (including the other types described below if we have them), we must call this function to synchronize with ConfD. No callbacks will be invoked until it has been called, and after the call, no further registrations are allowed.
+When we have registered all the callbacks for a daemon (including the other
+types described below if we have them), we must call this function to
+synchronize with ConfD. No callbacks will be invoked until it has been
+called, and after the call, no further registrations are allowed.
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
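+
+E.g. (a minimal sketch, reusing the daemon context dx and the callback
+instances tcb and dcb from the register_trans_cb() and register_data_cb()
+examples):
+
+    dp.register_trans_cb(dx, tcb)
+    dp.register_data_cb(dx, 'example-callpoint-1', dcb)
+    dp.register_done(dx)
+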
-### register\_error\_cb
+### register_error_cb
```python
register_error_cb(dx, errortypes, ecbs) -> None
```
-This funciton can be used to register error callbacks that are invoked for internally generated errors.
+This function can be used to register error callbacks that are
+invoked for internally generated errors.
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
* errortypes -- logical OR of the error types that the ecbs should handle
* ecbs -- the callback instance (see below)
E.g.:
-```
-class ErrorCallbacks(object):
- def cb_format_error(self, uinfo, errinfo_dict, default_msg):
- dp.error_seterr(uinfo, default_msg)
-ecbs = ErrorCallbacks()
-dp.register_error_cb(ctx,
- dp.ERRTYPE_BAD_VALUE |
- dp.ERRTYPE_MISC, ecbs)
-dp.register_done(ctx)
-```
+ class ErrorCallbacks(object):
+ def cb_format_error(self, uinfo, errinfo_dict, default_msg):
+ dp.error_seterr(uinfo, default_msg)
+ ecbs = ErrorCallbacks()
+ dp.register_error_cb(ctx,
+ dp.ERRTYPE_BAD_VALUE |
+ dp.ERRTYPE_MISC, ecbs)
+ dp.register_done(ctx)
-### register\_nano\_service\_cb
+### register_nano_service_cb
```python
register_nano_service_cb(dx,servicepoint,componenttype,state,nscb) -> None
@@ -1128,7 +1256,7 @@ This function registers the nano service callbacks.
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
* servicepoint -- name of the service point (string)
* componenttype -- name of the plan component for the nano service (string)
* state -- name of component state for the nano service (string)
@@ -1136,159 +1264,161 @@ Keyword arguments:
E.g:
-```
-class NanoServiceCallbacks(object):
- def cb_nano_create(self, tctx, root, service, plan,
- component, state, proplist, compproplist):
- pass
+ class NanoServiceCallbacks(object):
+ def cb_nano_create(self, tctx, root, service, plan,
+ component, state, proplist, compproplist):
+ pass
- def cb_nano_delete(self, tctx, root, service, plan,
- component, state, proplist, compproplist):
- pass
+ def cb_nano_delete(self, tctx, root, service, plan,
+ component, state, proplist, compproplist):
+ pass
-nscb = NanoServiceCallbacks()
-dp.register_nano_service_cb(dx, 'service-point-1', 'comp', 'state', nscb)
-```
+ nscb = NanoServiceCallbacks()
+ dp.register_nano_service_cb(dx, 'service-point-1', 'comp', 'state', nscb)
-### register\_notification\_snmp\_inform\_cb
+### register_notification_snmp_inform_cb
```python
register_notification_snmp_inform_cb(dx, cb_id, cbs) -> None
```
-If we want to receive information about the delivery of SNMP inform-requests, we must register two callbacks for this.
+If we want to receive information about the delivery of SNMP
+inform-requests, we must register two callbacks for this.
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
-* cb\_id -- the callback identifier
+* dx -- a daemon context acquired through a call to init_daemon()
+* cb_id -- the callback identifier
* cbs -- the callback instance (see below)
E.g.:
-```
-class NotifySnmpCallbacks(object):
- def cb_targets(self, nctx, ref, targets):
- pass
+ class NotifySnmpCallbacks(object):
+ def cb_targets(self, nctx, ref, targets):
+ pass
- def cb_result(self, nctx, ref, target, got_response):
- pass
+ def cb_result(self, nctx, ref, target, got_response):
+ pass
-cbs = NotifySnmpCallbacks()
-dp.register_notification_snmp_inform_cb(dx, 'callback-id-1', cbs)
-```
+ cbs = NotifySnmpCallbacks()
+ dp.register_notification_snmp_inform_cb(dx, 'callback-id-1', cbs)
-### register\_notification\_stream
+### register_notification_stream
```python
register_notification_stream(dx, ncbs, sock, streamname) -> NotificationCtxRef
```
-This function registers the notification stream and optionally two callback functions used for the replay functionality.
+This function registers the notification stream and optionally two callback
+functions used for the replay functionality.
-The returned notification context must be used by the application for the sending of live notifications via notification\_send() or notification\_send\_path().
+The returned notification context must be used by the application for the
+sending of live notifications via notification_send() or
+notification_send_path().
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
* ncbs -- the callback instance (see below)
* sock -- a previously connected worker socket
* streamname -- the name of the notification stream
E.g.:
-```
-class NotificationCallbacks(object):
- def cb_get_log_times(self, nctx):
- pass
+ class NotificationCallbacks(object):
+ def cb_get_log_times(self, nctx):
+ pass
- def cb_replay(self, nctx, start, stop):
- pass
+ def cb_replay(self, nctx, start, stop):
+ pass
-ncbs = NotificationCallbacks()
-livectx = dp.register_notification_stream(dx, ncbs, workersock,
-'streamname')
-```
+ ncbs = NotificationCallbacks()
+ livectx = dp.register_notification_stream(dx, ncbs, workersock,
+ 'streamname')
-### register\_notification\_sub\_snmp\_cb
+### register_notification_sub_snmp_cb
```python
register_notification_sub_snmp_cb(dx, sub_id, cbs) -> None
```
-Registers a callback function to be called when an SNMP notification is received by the SNMP gateway.
+Registers a callback function to be called when an SNMP notification is
+received by the SNMP gateway.
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
-* sub\_id -- the subscription id for the notifications
+* dx -- a daemon context acquired through a call to init_daemon()
+* sub_id -- the subscription id for the notifications
* cbs -- the callback instance (see below)
E.g.:
-```
-class NotifySubSnmpCallbacks(object):
- def cb_recv(self, nctx, notification, varbinds, src_addr, port):
- pass
+ class NotifySubSnmpCallbacks(object):
+ def cb_recv(self, nctx, notification, varbinds, src_addr, port):
+ pass
-cbs = NotifySubSnmpCallbacks()
-dp.register_notification_sub_snmp_cb(dx, 'sub-id-1', cbs)
-```
+ cbs = NotifySubSnmpCallbacks()
+ dp.register_notification_sub_snmp_cb(dx, 'sub-id-1', cbs)
-### register\_range\_action\_cbs
+### register_range_action_cbs
```python
register_range_action_cbs(dx, actionpoint, acb, lower, upper, path) -> None
```
-A variant of register\_action\_cbs() which registers action callbacks for a range of key values. The lower, upper, and path arguments are the same as for register\_range\_data\_cb().
+A variant of register_action_cbs() which registers action callbacks for a
+range of key values. The lower, upper, and path arguments are the same as
+for register_range_data_cb().
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
* actionpoint -- the name of the action point
-* data -- the callback instance (see register\_action\_cbs())
+* acb -- the callback instance (see register_action_cbs())
* lower -- a list of Value's or None
* upper -- a list of Value's or None
* path -- path for the list (string)
-### register\_range\_data\_cb
+### register_range_data_cb
```python
register_range_data_cb(dx, callpoint, data, lower, upper, path,
flags) -> None
```
-This is a variant of register\_data\_cb() which registers a set of callbacks for a range of list entries.
+This is a variant of register_data_cb() which registers a set of callbacks
+for a range of list entries.
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
* callpoint -- name of a tailf:callpoint in the data model
-* data -- the callback instance (see register\_data\_cb())
+* data -- the callback instance (see register_data_cb())
* lower -- a list of Value's or None
* upper -- a list of Value's or None
* path -- path for the list (string)
-* flags -- data callbacks flags, dp.DATA\_\* (optional)
+* flags -- data callbacks flags, dp.DATA_* (optional)
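+
+E.g. (a sketch only; the list path and key values are illustrative, and
+DataCallbacks is the class from the register_data_cb() example):
+
+    dcb = DataCallbacks()
+    lower = [_lib.Value(1, _lib.C_INT32)]
+    upper = [_lib.Value(99, _lib.C_INT32)]
+    dp.register_range_data_cb(dx, 'example-callpoint-1', dcb,
+                              lower, upper, '/example/item')
+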
-### register\_range\_valpoint\_cb
+### register_range_valpoint_cb
```python
register_range_valpoint_cb(dx, valpoint, vcb, lower, upper, path) -> None
```
-A variant of register\_valpoint\_cb() which registers a validation function for a range of key values. The lower, upper and path arguments are the same as for register\_range\_data\_cb().
+A variant of register_valpoint_cb() which registers a validation function
+for a range of key values. The lower, upper and path arguments are the same
+as for register_range_data_cb().
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
* valpoint -- name of a validation point
-* data -- the callback instance (see register\_valpoint\_cb())
+* vcb -- the callback instance (see register_valpoint_cb())
* lower -- a list of Value's or None
* upper -- a list of Value's or None
* path -- path for the list (string)
-### register\_service\_cb
+### register_service_cb
```python
register_service_cb(dx, servicepoint, scb) -> None
@@ -1298,43 +1428,44 @@ This function registers the service callbacks.
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
* servicepoint -- name of the service point (string)
* scb -- the callback instance (see below)
E.g:
-```
-class ServiceCallbacks(object):
- def cb_create(self, tctx, kp, proplist, fastmap_thandle):
- pass
+ class ServiceCallbacks(object):
+ def cb_create(self, tctx, kp, proplist, fastmap_thandle):
+ pass
- def cb_pre_modification(self, tctx, op, kp, proplist):
- pass
+ def cb_pre_modification(self, tctx, op, kp, proplist):
+ pass
- def cb_post_modification(self, tctx, op, kp, proplist):
- pass
+ def cb_post_modification(self, tctx, op, kp, proplist):
+ pass
-scb = ServiceCallbacks()
-dp.register_service_cb(dx, 'service-point-1', scb)
-```
+ scb = ServiceCallbacks()
+ dp.register_service_cb(dx, 'service-point-1', scb)
-### register\_snmp\_notification
+### register_snmp_notification
```python
register_snmp_notification(dx, sock, notify_name, ctx_name) -> NotificationCtxRef
```
-SNMP notifications can also be sent via the notification framework, however most aspects of the stream concept do not apply for SNMP. This function is used to register a worker socket, the snmpNotifyName (notify\_name), and SNMP context (ctx\_name) to be used for the notifications.
+SNMP notifications can also be sent via the notification framework;
+however, most aspects of the stream concept do not apply to SNMP. This
+function is used to register a worker socket, the snmpNotifyName
+(notify_name), and SNMP context (ctx_name) to be used for the
+notifications.
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
* sock -- a previously connected worker socket
-* notify\_name -- the snmpNotifyName
-* ctx\_name -- the SNMP context
+* notify_name -- the snmpNotifyName
+* ctx_name -- the SNMP context
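+
+E.g. (a sketch; the snmpNotifyName and SNMP context values are
+illustrative only):
+
+    snmpctx = dp.register_snmp_notification(dx, workersock,
+                                            'std_v2_inform', '')
+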
-### register\_trans\_cb
+### register_trans_cb
```python
register_trans_cb(dx, trans) -> None
@@ -1344,188 +1475,198 @@ Registers transaction callback functions.
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
* trans -- the callback instance (see below)
-The trans argument should be an instance of a class with callback methods. E.g.:
+The trans argument should be an instance of a class with callback methods.
+E.g.:
-```
-class TransCallbacks(object):
- def cb_init(self, tctx):
- pass
+ class TransCallbacks(object):
+ def cb_init(self, tctx):
+ pass
- def cb_trans_lock(self, tctx):
- pass
+ def cb_trans_lock(self, tctx):
+ pass
- def cb_trans_unlock(self, tctx):
- pass
+ def cb_trans_unlock(self, tctx):
+ pass
- def cb_write_start(self, tctx):
- pass
+ def cb_write_start(self, tctx):
+ pass
- def cb_prepare(self, tctx):
- pass
+ def cb_prepare(self, tctx):
+ pass
- def cb_abort(self, tctx):
- pass
+ def cb_abort(self, tctx):
+ pass
- def cb_commit(self, tctx):
- pass
+ def cb_commit(self, tctx):
+ pass
- def cb_finish(self, tctx):
- pass
+ def cb_finish(self, tctx):
+ pass
- def cb_interrupt(self, tctx):
- pass
+ def cb_interrupt(self, tctx):
+ pass
-tcb = TransCallbacks()
-dp.register_trans_cb(dx, tcb)
-```
+ tcb = TransCallbacks()
+ dp.register_trans_cb(dx, tcb)
-### register\_trans\_validate\_cb
+### register_trans_validate_cb
```python
register_trans_validate_cb(dx, vcbs) -> None
```
-This function installs two callback functions for the daemon context. One function that gets called when the validation phase starts in a transaction and one when the validation phase stops in a transaction.
+This function installs two callback functions for the daemon context: one
+that is called when the validation phase of a transaction starts, and one
+that is called when the validation phase stops.
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
* vcbs -- the callback instance (see below)
-The vcbs argument should be an instance of a class with callback methods. E.g.:
+The vcbs argument should be an instance of a class with callback methods.
+E.g.:
-```
-class TransValidateCallbacks(object):
- def cb_init(self, tctx):
- pass
+ class TransValidateCallbacks(object):
+ def cb_init(self, tctx):
+ pass
- def cb_stop(self, tctx):
- pass
+ def cb_stop(self, tctx):
+ pass
-vcbs = TransValidateCallbacks()
-dp.register_trans_validate_cb(dx, vcbs)
-```
+ vcbs = TransValidateCallbacks()
+ dp.register_trans_validate_cb(dx, vcbs)
-### register\_usess\_cb
+### register_usess_cb
```python
register_usess_cb(dx, ucb) -> None
```
-This function can be used to register information callbacks that are invoked for user session start and stop.
+This function can be used to register information callbacks that are
+invoked for user session start and stop.
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
* ucb -- the callback instance (see below)
E.g.:
-```
-class UserSessionCallbacks(object):
- def cb_start(self, dx, uinfo):
- pass
+ class UserSessionCallbacks(object):
+ def cb_start(self, dx, uinfo):
+ pass
- def cb_stop(self, dx, uinfo):
- pass
+ def cb_stop(self, dx, uinfo):
+ pass
-ucb = UserSessionCallbacks()
-dp.register_usess_cb(dx, acb)
-```
+ ucb = UserSessionCallbacks()
+    dp.register_usess_cb(dx, ucb)
-### register\_valpoint\_cb
+### register_valpoint_cb
```python
register_valpoint_cb(dx, valpoint, vcb) -> None
```
-We must also install an actual validation function for each validation point, i.e. for each tailf:validate statement in the YANG data model.
+We must also install an actual validation function for each validation
+point, i.e. for each tailf:validate statement in the YANG data model.
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
* valpoint -- the name of the validation point
* vcb -- the callback instance (see below)
-The vcb argument should be an instance of a class with a callback method. E.g.:
+The vcb argument should be an instance of a class with a callback method.
+E.g.:
-```
-class ValpointCallback(object):
- def cb_validate(self, tctx, kp, newval):
- pass
+ class ValpointCallback(object):
+ def cb_validate(self, tctx, kp, newval):
+ pass
-vcb = ValpointCallback()
-dp.register_valpoint_cb(dx, 'valpoint-1', vcb)
-```
+ vcb = ValpointCallback()
+ dp.register_valpoint_cb(dx, 'valpoint-1', vcb)
-### release\_daemon
+### release_daemon
```python
release_daemon(dx) -> None
```
-Releases all memory that has been allocated by init\_daemon() and other functions for the daemon context. The control socket as well as all the worker sockets must be closed by the application (before or after release\_daemon() has been called).
+Releases all memory that has been allocated by init_daemon() and other
+functions for the daemon context. The control socket as well as all the
+worker sockets must be closed by the application (before or after
+release_daemon() has been called).
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
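+
+E.g. (a sketch, assuming ctrlsock and workersock are the control and
+worker sockets the application connected for this daemon):
+
+    dp.release_daemon(dx)
+    ctrlsock.close()
+    workersock.close()
+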
-### service\_reply\_proplist
+### service_reply_proplist
```python
service_reply_proplist(tctx, proplist) -> None
```
-This function must be called with the new property list, immediately prior to returning from the callback, if the stored property list should be updated. If a callback returns without calling service\_reply\_proplist(), the previous property list is retained. To completely delete the property list, call this function with the proplist argument set to an empty list or None.
+This function must be called with the new property list, immediately prior
+to returning from the callback, if the stored property list should be
+updated. If a callback returns without calling service_reply_proplist(),
+the previous property list is retained. To completely delete the property
+list, call this function with the proplist argument set to an empty list or
+None.
-The proplist argument should be a list of 2-tuples built up like this: list( (name, value), (name, value), ... ) In a 2-tuple both 'name' and 'value' must be strings.
+The proplist argument should be a list of 2-tuples built up like this:
+ list( (name, value), (name, value), ... )
+In a 2-tuple both 'name' and 'value' must be strings.
Keyword arguments:
* tctx -- a transaction context
* proplist -- a list of properties or None
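+
+E.g. (a sketch following the register_service_cb() example; the property
+name and value are illustrative only):
+
+    class ServiceCallbacks(object):
+        def cb_create(self, tctx, kp, proplist, fastmap_thandle):
+            proplist.append(('example-name', 'example-value'))
+            dp.service_reply_proplist(tctx, proplist)
+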
-### set\_daemon\_flags
+### set_daemon_flags
```python
set_daemon_flags(dx, flags) -> None
```
-Modifies the API behaviour according to the flags ORed into the flags argument.
+Modifies the API behaviour according to the flags ORed into the flags
+argument.
Keyword arguments:
-* dx -- a daemon context acquired through a call to init\_daemon()
+* dx -- a daemon context acquired through a call to init_daemon()
* flags -- the flags to set
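+
+E.g. (a sketch; DAEMON_FLAG_STRINGSONLY is used here only as an
+illustration of a dp.DAEMON_FLAG_* value):
+
+    dp.set_daemon_flags(dx, dp.DAEMON_FLAG_STRINGSONLY)
+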
-### trans\_set\_fd
+### trans_set_fd
```python
trans_set_fd(tctx, sock) -> None
```
-Associate a worker socket with the transaction, or validation phase. This function must be called in the transaction and validation cb\_init() callbacks.
+Associate a worker socket with the transaction or validation phase. This
+function must be called in the transaction and validation cb_init()
+callbacks.
Keyword arguments:
* tctx -- a transaction context
* sock -- a previously connected worker socket
-A minimal implementation of a transaction cb\_init() callback looks like:
+A minimal implementation of a transaction cb_init() callback looks like:
-```
-class TransCb(object):
- def __init__(self, workersock):
- self.workersock = workersock
+ class TransCb(object):
+ def __init__(self, workersock):
+ self.workersock = workersock
- def cb_init(self, tctx):
- dp.trans_set_fd(tctx, self.workersock)
-```
+ def cb_init(self, tctx):
+ dp.trans_set_fd(tctx, self.workersock)
-### trans\_seterr
+### trans_seterr
```python
trans_seterr(tctx, errstr) -> None
@@ -1538,45 +1679,49 @@ Keyword arguments:
* tctx -- a transaction context
* errstr -- an error message string
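+
+E.g. (a sketch from within a transaction or data callback):
+
+    dp.trans_seterr(tctx, 'resource is not available')
+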
-### trans\_seterr\_extended
+### trans_seterr_extended
```python
trans_seterr_extended(tctx, code, apptag_ns, apptag_tag, errstr) -> None
```
-This function can be used to provide more structured error information from a transaction or data callback.
+This function can be used to provide more structured error information
+from a transaction or data callback.
Keyword arguments:
* tctx -- a transaction context
* code -- an error code
-* apptag\_ns -- namespace - should be set to 0
-* apptag\_tag -- either 0 or the hash value for a data model node
+* apptag_ns -- namespace - should be set to 0
+* apptag_tag -- either 0 or the hash value for a data model node
* errstr -- an error message string
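+
+E.g. (a sketch; 'code' is assumed to be one of the error code values
+accepted by this function):
+
+    dp.trans_seterr_extended(tctx, code, 0, 0, 'resource is not available')
+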
-### trans\_seterr\_extended\_info
+### trans_seterr_extended_info
```python
trans_seterr_extended_info(tctx, code, apptag_ns, apptag_tag,
error_info, errstr) -> None
```
-This function can be used to provide structured error information in the same way as trans\_seterr\_extended(), and additionally provide contents for the NETCONF element.
+This function can be used to provide structured error information in the
+same way as trans_seterr_extended(), and additionally provide contents for
+the NETCONF <error-info> element.
Keyword arguments:
* tctx -- a transaction context
* code -- an error code
-* apptag\_ns -- namespace - should be set to 0
-* apptag\_tag -- either 0 or the hash value for a data model node
-* error\_info -- a list of \_lib.TagValue instances
+* apptag_ns -- namespace - should be set to 0
+* apptag_tag -- either 0 or the hash value for a data model node
+* error_info -- a list of _lib.TagValue instances
* errstr -- an error message string
+
## Classes
### _class_ **AuthCtxRef**
-This type represents the c-type struct confd\_auth\_ctx.
+This type represents the c-type struct confd_auth_ctx.
Available attributes:
@@ -1595,7 +1740,7 @@ _None_
### _class_ **AuthorizationCtxRef**
-This type represents the c-type struct confd\_authorization\_ctx.
+This type represents the c-type struct confd_authorization_ctx.
Available attributes:
@@ -1610,7 +1755,7 @@ _None_
### _class_ **DaemonCtxRef**
-struct confd\_daemon\_ctx references object
+struct confd_daemon_ctx references object
Members:
@@ -1618,7 +1763,7 @@ _None_
### _class_ **DbCtxRef**
-This type represents the c-type struct confd\_db\_ctx.
+This type represents the c-type struct confd_db_ctx.
DbCtxRef cannot be directly instantiated from Python.
@@ -1634,6 +1779,7 @@ Method:
did() -> int
```
+
@@ -1646,6 +1792,7 @@ Method:
dx() -> DaemonCtxRef
```
+
@@ -1658,6 +1805,7 @@ Method:
lastop() -> int
```
+
@@ -1670,6 +1818,7 @@ Method:
qref() -> int
```
+
@@ -1682,18 +1831,19 @@ Method:
uinfo() -> _ncs.UserInfo
```
+
### _class_ **ListFilter**
-This type represents the c-type struct confd\_list\_filter.
+This type represents the c-type struct confd_list_filter.
Available attributes:
-* type -- filter type, LF\_\*
+* type -- filter type, LF_*
* expr1 -- OR, AND, NOT expression
* expr2 -- OR, AND expression
-* op -- operation, CMP\_\* and EXEC\_\*
+* op -- operation, CMP_* and EXEC_*
* node -- filter tagpath
* val -- filter value
@@ -1705,12 +1855,12 @@ _None_
### _class_ **NotificationCtxRef**
-This type represents the c-type struct confd\_notification\_ctx.
+This type represents the c-type struct confd_notification_ctx.
Available attributes:
* name -- stream name or snmp notify name (string or None)
-* ctx\_name -- for snmp only (string or None)
+* ctx_name -- for snmp only (string or None)
* fd -- worker socket (int)
* dx -- the daemon context (DaemonCtxRef)
@@ -1722,17 +1872,19 @@ _None_
### _class_ **TrItemRef**
-This type represents the c-type confd\_tr\_item.
+This type represents the c-type confd_tr_item.
Available attributes:
* callpoint -- the callpoint (string)
-* op -- operation, one of C\_SET\_ELEM, C\_CREATE, C\_REMOVE, C\_SET\_CASE, C\_SET\_ATTR or C\_MOVE\_AFTER (int)
+* op -- operation, one of C_SET_ELEM, C_CREATE, C_REMOVE, C_SET_CASE,
+ C_SET_ATTR or C_MOVE_AFTER (int)
* hkp -- the keypath (HKeypathRef)
* val -- the value (Value or None)
-* choice -- the choice, only for C\_SET\_CASE (Value or None)
-* attr -- attribute, only for C\_SET\_ATTR (int or None)
-* next -- the next TrItemRef object in the linked list or None if no more items are found
+* choice -- the choice, only for C_SET_CASE (Value or None)
+* attr -- attribute, only for C_SET_ATTR (int or None)
+* next -- the next TrItemRef object in the linked list or None if no more
+ items are found
TrItemRef cannot be directly instantiated from Python.
@@ -1914,6 +2066,7 @@ MISC_APPLICATION_INTERNAL = 20
MISC_BAD_PERSIST_ID = 16
MISC_CANDIDATE_ABORT_BAD_USID = 17
MISC_CDB_OPER_UNAVAILABLE = 37
+MISC_CONF_LOAD_NOT_ALLOWED = 59
MISC_DATA_MISSING = 44
MISC_EXTERNAL = 22
MISC_EXTERNAL_TIMEOUT = 45
@@ -2094,7 +2247,6 @@ NCS_UNKNOWN_NED_IDS_COMPLIANCE_TEMPLATE = 124
NCS_UNKNOWN_NED_ID_DEVICE_TEMPLATE = 106
NCS_XML_PARSE = 11
NCS_YANGLIB_NO_SCHEMA_FOR_RUNNING = 114
-OPERATION_CASE_EXISTS = 13
PATCH_FLAG_AAA_CHECKED = 8
PATCH_FLAG_BUFFER_DAMPENED = 2
PATCH_FLAG_FILTER = 4
diff --git a/developer-reference/pyapi/_ncs.events.md b/developer-reference/pyapi/_ncs.events.md
index 2fc74f74..3ea1b937 100644
--- a/developer-reference/pyapi/_ncs.events.md
+++ b/developer-reference/pyapi/_ncs.events.md
@@ -1,27 +1,37 @@
-# \_ncs.events Module
+# Python _ncs.events Module
Low level module for subscribing to NCS event notifications.
-This module is used to connect to NCS and subscribe to certain events generated by NCS. The API to receive events from NCS is a socket based API whereby the application connects to NCS and receives events on a socket. See also the Notifications chapter in the User Guide. The program misc/notifications/confd\_notifications.c in the examples collection illustrates subscription and processing for all these events, and can also be used standalone in a development environment to monitor NCS events.
+This module is used to connect to NCS and subscribe to certain
+events generated by NCS. The API to receive events from NCS is a
+socket based API whereby the application connects to NCS and receives
+events on a socket. See also the Notifications chapter in the User Guide.
+The program misc/notifications/confd_notifications.c in the examples
+collection illustrates subscription and processing for all these events,
+and can also be used standalone in a development environment to monitor
+NCS events.
-This documentation should be read together with the [confd\_lib\_events(3)](../../resources/man/confd_lib_events.3.md) man page.
+This documentation should be read together with the [confd_lib_events(3)](../../resources/man/confd_lib_events.3.md) man page.
## Functions
-### diff\_notification\_done
+### diff_notification_done
```python
diff_notification_done(sock, tctx) -> None
```
-If the received event was NOTIF\_COMMIT\_DIFF it is important that we call this function when we are done reading the transaction diffs over MAAPI. The transaction is hanging until this function gets called. This function also releases memory associated to the transaction in the library.
+If the received event was NOTIF_COMMIT_DIFF it is important that we call
+this function when we are done reading the transaction diffs over MAAPI.
+The transaction is hanging until this function gets called. This function
+also releases memory associated with the transaction in the library.
Keyword arguments:
* sock -- a previously connected notification socket
* tctx -- a transaction context
-### notifications\_connect
+### notifications_connect
```python
notifications_connect(sock, mask, ip, port, path) -> None
@@ -33,249 +43,271 @@ Keyword arguments:
* sock -- a Python socket instance
* mask -- a bitmask of one or several notification type values
-* ip -- the ip address if socket is AF\_INET (optional)
-* port -- the port if socket is AF\_INET (optional)
-* path -- a filename if socket is AF\_UNIX (optional).
+* ip -- the ip address if socket is AF_INET (optional)
+* port -- the port if socket is AF_INET (optional)
+* path -- a filename if socket is AF_UNIX (optional).
-### notifications\_connect2
+### notifications_connect2
```python
notifications_connect2(sock, mask, data, ip, port, path) -> None
```
-This variant of notifications\_connect is required if we wish to subscribe to NOTIF\_HEARTBEAT, NOTIF\_HEALTH\_CHECK, or NOTIF\_STREAM\_EVENT events.
+This variant of notifications_connect is required if we wish to subscribe
+to NOTIF_HEARTBEAT, NOTIF_HEALTH_CHECK, or NOTIF_STREAM_EVENT events.
Keyword arguments:
* sock -- a Python socket instance
* mask -- a bitmask of one or several notification type values
-* data -- a \_events.NotificationsData instance
-* ip -- the ip address if socket is AF\_INET (optional)
-* port -- the port if socket is AF\_INET (optional)
-* path -- a filename if socket is AF\_UNIX (optional)
+* data -- a _events.NotificationsData instance
+* ip -- the ip address if socket is AF_INET (optional)
+* port -- the port if socket is AF_INET (optional)
+* path -- a filename if socket is AF_UNIX (optional)
-### read\_notification
+### read_notification
```python
read_notification(sock) -> dict
```
-The application is responsible for polling the notification socket. Once data is available to be read on the socket the application must call read\_notification() to read the data from the socket. On success a dictionary containing notification information will be returned (see below).
+The application is responsible for polling the notification socket. Once
+data is available to be read on the socket the application must call
+read_notification() to read the data from the socket. On success a
+dictionary containing notification information will be returned (see below).
Keyword arguments:
* sock -- a previously connected notification socket
-On success the returned dict will contain information corresponding to the c struct confd\_notification. The notification type is accessible through the 'type' key. The remaining information will be different depending on which type of notification this is (described below).
+On success the returned dict will contain information corresponding to the
+c struct confd_notification. The notification type is accessible through
+the 'type' key. The remaining information will be different depending on
+which type of notification this is (described below).
-Keys for type NOTIF\_AUDIT (struct confd\_audit\_notification):
+Keys for type NOTIF_AUDIT (struct confd_audit_notification):
-* logno
-* user
-* msg
-* usid
+* logno
+* user
+* msg
+* usid
-Keys for type NOTIF\_DAEMON, NOTIF\_NETCONF, NOTIF\_DEVEL, NOTIF\_JSONRPC, NOTIF\_WEBUI, or NOTIF\_TAKEOVER\_SYSLOG (struct confd\_syslog\_notification):
+Keys for type NOTIF_DAEMON, NOTIF_NETCONF, NOTIF_DEVEL, NOTIF_JSONRPC,
+NOTIF_WEBUI, or NOTIF_TAKEOVER_SYSLOG (struct confd_syslog_notification):
-* prio
-* logno
-* msg
+* prio
+* logno
+* msg
-Keys for type NOTIF\_COMMIT\_SIMPLE (struct confd\_commit\_notification):
+Keys for type NOTIF_COMMIT_SIMPLE (struct confd_commit_notification):
-* database
-* diff\_available
-* flags
-* uinfo
+* database
+* diff_available
+* flags
+* uinfo
-Keys for type NOTIF\_COMMIT\_DIFF (struct confd\_commit\_diff\_notification):
-
-* database
-* flags
-* uinfo
-* tctx
-* label (optional)
-* comment (optional)
-
-Keys for type NOTIF\_USER\_SESSION (struct confd\_user\_sess\_notification):
-
-* type
-* uinfo
-* database
-
-Keys for type NOTIF\_HA\_INFO (struct confd\_ha\_notification):
-
-* type (1)
-* noprimary - if (1) is HA\_INFO\_NOPRIMARY
-* secondary\_died - if (1) is HA\_INFO\_SECONDARY\_DIED (see below)
-* secondary\_arrived - if (1) is HA\_INFO\_SECONDARY\_ARRIVED (see below)
-* cdb\_initialized\_by\_copy - if (1) is HA\_INFO\_SECONDARY\_INITIALIZED
-* besecondary\_result - if (1) is HA\_INFO\_BESECONDARY\_RESULT
-
-If secondary\_died or secondary\_arrived is present they will in turn contain a dictionary with the following keys:
-
-* nodeid
-* af (1)
-* ip4 - if (1) is AF\_INET
-* ip6 - if (1) is AF\_INET6
-* str - if (1) if AF\_UNSPEC
-
-Keys for type NOTIF\_SUBAGENT\_INFO (struct confd\_subagent\_notification):
-
-* type
-* name
-
-Keys for type NOTIF\_COMMIT\_FAILED (struct confd\_commit\_failed\_notification):
-
-* provider (1)
-* dbname
-* port - if (1) is DP\_NETCONF
-* af (2) - if (1) is DP\_NETCONF
-* ip4 - if (2) is AF\_INET
-* ip6 - if (2) is AF\_INET6
-* daemon\_name - if (1) is DP\_EXTERNAL
-
-Keys for type NOTIF\_SNMPA (struct confd\_snmpa\_notification):
-
-* pdu\_type (1)
-* request\_id
-* error\_status
-* error\_index
-* port
-* af (2)
-* ip4 - if (3) is AF\_INET
-* ip6 - if (3) is AF\_INET6
-* vb (optional)
-* generic\_trap - if (1) is SNMPA\_PDU\_V1TRAP
-* specific\_trap - if (1) is SNMPA\_PDU\_V1TRAP
-* time\_stamp - if (1) is SNMPA\_PDU\_V1TRAP
-* enterprise - if (1) is SNMPA\_PDU\_V1TRAP (optional)
-
-Keys for type NOTIF\_FORWARD\_INFO (struct confd\_forward\_notification):
-
-* type
-* target
-* uinfo
-
-Keys for type NOTIF\_CONFIRMED\_COMMIT (struct confd\_confirmed\_commit\_notification):
-
-* type
-* timeout
-* uinfo
-
-Keys for type NOTIF\_UPGRADE\_EVENT (struct confd\_upgrade\_notification):
-
-* event
-
-Keys for type NOTIF\_COMPACTION (struct confd\_compaction\_notification):
-
-* dbfile (1) - name of the compacted file
-* type - automatic or manual
-* fsize\_start - size at start (bytes)
-* fsize\_end - size at end (bytes)
-* fsize\_last - size at end of last compaction (bytes)
-* time\_start - start time (microseconds)
-* duration - duration (microseconds)
-* ntrans - number of transactions written to (1) since last compaction
-
-Keys for type NOTIF\_COMMIT\_PROGRESS and NOTIF\_PROGRESS (struct confd\_progress\_notification):
-
-* type (1)
-* timestamp
-* duration if (1) is CONFD\_PROGRESS\_STOP
-* trace\_id (optional)
-* span\_id
-* parent\_span\_id (optional)
-* usid
-* tid
-* datastore
-* context (optional)
-* subsystem (optional)
-* msg (optional)
-* annotation (optional)
-* num\_attributes
-* attributes (optional)
-* num\_links
-* links (optional)
-
-Keys for type NOTIF\_STREAM\_EVENT (struct confd\_stream\_notification):
-
-* type (1)
-* error - if (1) is STREAM\_REPLAY\_FAILED
-* event\_time - if (1) is STREAM\_NOTIFICATION\_EVENT
-* values - if (1) is STREAM\_NOTIFICATION\_EVENT
-
-Keys for type NOTIF\_CQ\_PROGRESS (struct ncs\_cq\_progress\_notification):
-
-* type
-* timestamp
-* cq\_id
-* cq\_tag
-* label
-* completed\_devices (optional)
-* transient\_devices (optional)
-* failed\_devices (optional)
-* failed\_reasons - if failed\_devices is present
-* completed\_services (optional)
-* completed\_services\_completed\_devices - if completed\_services is present
-* failed\_services (optional)
-* failed\_services\_completed\_devices - if failed\_services is present
-* failed\_services\_failed\_devices - if failed\_services is present
-
-Keys for type NOTIF\_CALL\_HOME\_INFO (struct ncs\_call\_home\_notification):
-
-* type (1)
-* device - if (1) is CALL\_HOME\_DEVICE\_CONNECTED or CALL\_HOME\_DEVICE\_DISCONNECTED
-* af (2)
-* ip4 - if (2) is AF\_INET
-* ip6 - if (2) is AF\_INET6
-* port
-* ssh\_host\_key
-* ssh\_key\_alg
-
-### sync\_audit\_network\_notification
+Keys for type NOTIF_COMMIT_DIFF (struct confd_commit_diff_notification):
+
+* database
+* flags
+* uinfo
+* tctx
+* label (optional)
+* comment (optional)
+
+Keys for type NOTIF_USER_SESSION (struct confd_user_sess_notification):
+
+* type
+* uinfo
+* database
+
+Keys for type NOTIF_HA_INFO (struct confd_ha_notification):
+
+* type (1)
+* noprimary - if (1) is HA_INFO_NOPRIMARY
+* secondary_died - if (1) is HA_INFO_SECONDARY_DIED (see below)
+* secondary_arrived - if (1) is HA_INFO_SECONDARY_ARRIVED (see below)
+* cdb_initialized_by_copy - if (1) is HA_INFO_SECONDARY_INITIALIZED
+* besecondary_result - if (1) is HA_INFO_BESECONDARY_RESULT
+
+If secondary_died or secondary_arrived is present they will in turn contain
+a dictionary with the following keys:
+
+* nodeid
+* af (1)
+* ip4 - if (1) is AF_INET
+* ip6 - if (1) is AF_INET6
+* str - if (1) is AF_UNSPEC
+
+Keys for type NOTIF_SUBAGENT_INFO (struct confd_subagent_notification):
+
+* type
+* name
+
+Keys for type NOTIF_COMMIT_FAILED (struct confd_commit_failed_notification):
+
+* provider (1)
+* dbname
+* port - if (1) is DP_NETCONF
+* af (2) - if (1) is DP_NETCONF
+* ip4 - if (2) is AF_INET
+* ip6 - if (2) is AF_INET6
+* daemon_name - if (1) is DP_EXTERNAL
+
+Keys for type NOTIF_SNMPA (struct confd_snmpa_notification):
+
+* pdu_type (1)
+* request_id
+* error_status
+* error_index
+* port
+* af (2)
+* ip4 - if (2) is AF_INET
+* ip6 - if (2) is AF_INET6
+* vb (optional)
+* generic_trap - if (1) is SNMPA_PDU_V1TRAP
+* specific_trap - if (1) is SNMPA_PDU_V1TRAP
+* time_stamp - if (1) is SNMPA_PDU_V1TRAP
+* enterprise - if (1) is SNMPA_PDU_V1TRAP (optional)
+
+Keys for type NOTIF_FORWARD_INFO (struct confd_forward_notification):
+
+* type
+* target
+* uinfo
+
+Keys for type NOTIF_CONFIRMED_COMMIT
+ (struct confd_confirmed_commit_notification):
+
+* type
+* timeout
+* uinfo
+
+Keys for type NOTIF_UPGRADE_EVENT (struct confd_upgrade_notification):
+
+* event
+
+Keys for type NOTIF_COMPACTION (struct confd_compaction_notification):
+
+* dbfile (1) - name of the compacted file
+* type - automatic or manual
+* fsize_start - size at start (bytes)
+* fsize_end - size at end (bytes)
+* fsize_last - size at end of last compaction (bytes)
+* time_start - start time (microseconds)
+* duration - duration (microseconds)
+* ntrans - number of transactions written to (1) since last compaction
+
+Keys for type NOTIF_COMMIT_PROGRESS and NOTIF_PROGRESS
+ (struct confd_progress_notification):
+
+* type (1)
+* timestamp
+* duration if (1) is CONFD_PROGRESS_STOP
+* trace_id (optional)
+* span_id
+* parent_span_id (optional)
+* usid
+* tid
+* datastore
+* context (optional)
+* subsystem (optional)
+* msg (optional)
+* annotation (optional)
+* num_attributes
+* attributes (optional)
+* num_links
+* links (optional)
+
+Keys for type NOTIF_STREAM_EVENT (struct confd_stream_notification):
+
+* type (1)
+* error - if (1) is STREAM_REPLAY_FAILED
+* event_time - if (1) is STREAM_NOTIFICATION_EVENT
+* values - if (1) is STREAM_NOTIFICATION_EVENT
+
+Keys for type NOTIF_CQ_PROGRESS (struct ncs_cq_progress_notification):
+
+* type
+* timestamp
+* cq_id
+* cq_tag
+* label
+* completed_devices (optional)
+* transient_devices (optional)
+* failed_devices (optional)
+* failed_reasons - if failed_devices is present
+* completed_services (optional)
+* completed_services_completed_devices - if completed_services is present
+* failed_services (optional)
+* failed_services_completed_devices - if failed_services is present
+* failed_services_failed_devices - if failed_services is present
+
+Keys for type NOTIF_CALL_HOME_INFO (struct ncs_call_home_notification):
+
+* type (1)
+* device - if (1) is CALL_HOME_DEVICE_CONNECTED or
+ CALL_HOME_DEVICE_DISCONNECTED
+* af (2)
+* ip4 - if (2) is AF_INET
+* ip6 - if (2) is AF_INET6
+* port
+* ssh_host_key
+* ssh_key_alg
+
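+
+E.g. (a minimal subscriber sketch; the address, port and notification mask
+are illustrative only):
+
+    import select
+    import socket
+    from _ncs import events
+
+    sock = socket.socket()
+    events.notifications_connect(sock, events.NOTIF_AUDIT,
+                                 ip='127.0.0.1', port=4569)
+    while True:
+        readable, _, _ = select.select([sock], [], [])
+        if sock in readable:
+            notif = events.read_notification(sock)
+            if notif['type'] == events.NOTIF_AUDIT:
+                print(notif['msg'])
+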
+### sync_audit_network_notification
```python
sync_audit_network_notification(sock, usid) -> None
```
-If the received event was NOTIF\_AUDIT\_NETWORK, and we are subscribing to notifications with the flag NOTIF\_AUDIT\_NETWORK\_SYNC, this function must be called when we are done processing the notification. The user session is hanging until this function gets called.
+If the received event was NOTIF_AUDIT_NETWORK, and we are subscribing to
+notifications with the flag NOTIF_AUDIT_NETWORK_SYNC, this function must be
+called when we are done processing the notification. The user session is
+hanging until this function gets called.
Keyword arguments:
* sock -- a previously connected notification socket
* usid -- the user session id
-### sync\_audit\_notification
+### sync_audit_notification
```python
sync_audit_notification(sock, usid) -> None
```
-If the received event was NOTIF\_AUDIT, and we are subscribing to notifications with the flag NOTIF\_AUDIT\_SYNC, this function must be called when we are done processing the notification. The user session is hanging until this function gets called.
+If the received event was NOTIF_AUDIT, and we are subscribing to
+notifications with the flag NOTIF_AUDIT_SYNC, this function must be called
+when we are done processing the notification. The user session is hanging
+until this function gets called.
Keyword arguments:
* sock -- a previously connected notification socket
* usid -- the user session id
-### sync\_ha\_notification
+### sync_ha_notification
```python
sync_ha_notification(sock) -> None
```
-If the received event was NOTIF\_HA\_INFO, and we are subscribing to notifications with the flag NOTIF\_HA\_INFO\_SYNC, this function must be called when we are done processing the notification. All HA processing is blocked until this function gets called.
+If the received event was NOTIF_HA_INFO, and we are subscribing to
+notifications with the flag NOTIF_HA_INFO_SYNC, this function must be
+called when we are done processing the notification. All HA processing is
+blocked until this function gets called.
Keyword arguments:
* sock -- a previously connected notification socket
+
## Classes
### _class_ **Notification**
-This is a placeholder for the c-type struct confd\_notification.
+This is a placeholder for the c-type struct confd_notification.
Notification cannot be directly instantiated from Python.
@@ -285,20 +317,22 @@ _None_
### _class_ **NotificationsData**
-This type represents the c-type struct confd\_notifications\_data.
+This type represents the c-type struct confd_notifications_data.
-The contructor for this type has the following signature:
+The constructor for this type has the following signature:
-NotificationsData(hearbeat\_interval, health\_check\_interval, stream\_name, start\_time, stop\_time, xpath\_filter, usid, verbosity) -> object
+NotificationsData(heartbeat_interval, health_check_interval, stream_name,
+ start_time, stop_time, xpath_filter, usid,
+ verbosity) -> object
Keyword arguments:
-* heartbeat\_interval -- time in milli seconds (int)
-* health\_check\_interval -- time in milli seconds (int)
-* stream\_name -- name of the notification stream (string)
-* start\_time -- the start time (Value)
-* stop\_time -- the stop time (Value)
-* xpath\_filter -- XPath filter for the stream (string) - optional
+* heartbeat_interval -- time in milliseconds (int)
+* health_check_interval -- time in milliseconds (int)
+* stream_name -- name of the notification stream (string)
+* start_time -- the start time (Value)
+* stop_time -- the stop time (Value)
+* xpath_filter -- XPath filter for the stream (string) - optional
* usid -- user session id for AAA restriction (int) - optional
* verbosity -- progress verbosity level (int) - optional
diff --git a/developer-reference/pyapi/_ncs.ha.md b/developer-reference/pyapi/_ncs.ha.md
index aede552b..4d4d60c4 100644
--- a/developer-reference/pyapi/_ncs.ha.md
+++ b/developer-reference/pyapi/_ncs.ha.md
@@ -1,10 +1,14 @@
-# \_ncs.ha Module
+# Python _ncs.ha Module
Low level module for connecting to NCS HA subsystem.
-This module is used to connect to the NCS High Availability (HA) subsystem. NCS can replicate the configuration data on several nodes in a cluster. The purpose of this API is to manage the HA functionality. The details on usage of the HA API are described in the chapter High availability in the User Guide.
+This module is used to connect to the NCS High Availability (HA)
+subsystem. NCS can replicate the configuration data on several nodes
+in a cluster. The purpose of this API is to manage the HA
+functionality. The details on usage of the HA API are described in the
+chapter High availability in the User Guide.
-This documentation should be read together with the [confd\_lib\_ha(3)](../../resources/man/confd_lib_ha.3.md) man page.
+This documentation should be read together with the [confd_lib_ha(3)](../../resources/man/confd_lib_ha.3.md) man page.
## Functions
@@ -14,7 +18,8 @@ This documentation should be read together with the [confd\_lib\_ha(3)](../../re
bemaster(sock, mynodeid) -> None
```
-This function is deprecated and will be removed. Use beprimary() instead.
+This function is deprecated and will be removed.
+Use beprimary() instead.
### benone
@@ -22,7 +27,8 @@ This function is deprecated and will be removed. Use beprimary() instead.
benone(sock) -> None
```
-Instruct a node to resume the initial state, i.e. neither become primary nor secondary.
+Instruct a node to resume the initial state, i.e. neither become primary
+nor secondary.
Keyword arguments:
@@ -47,7 +53,8 @@ Keyword arguments:
berelay(sock) -> None
```
-Instruct an established HA secondary node to be a relay for other secondary nodes.
+Instruct an established HA secondary node to be a relay for other
+secondary nodes.
Keyword arguments:
@@ -59,15 +66,22 @@ Keyword arguments:
besecondary(sock, mynodeid, primary_id, primary_ip, waitreply) -> None
```
-Instruct a NCS HA node to be a secondary node with a named primary node. If waitreply is True the function is synchronous and it will hang until the node has initialized its CDB database. This may mean that the CDB database is copied in its entirety from the primary node. If False, we do not wait for the reply, but it is possible to use a notifications socket and get notified asynchronously via a HA\_INFO\_BESECONDARY\_RESULT notification. In both cases, it is also possible to use a notifications socket and get notified asynchronously when CDB at the secondary node is initialized.
+Instruct an NCS HA node to be a secondary node with a named primary node.
+If waitreply is True the function is synchronous and it will hang until the
+node has initialized its CDB database. This may mean that the CDB database
+is copied in its entirety from the primary node. If False, we do not wait
+for the reply, but it is possible to use a notifications socket and get
+notified asynchronously via a HA_INFO_BESECONDARY_RESULT notification.
+In both cases, it is also possible to use a notifications socket and get
+notified asynchronously when CDB at the secondary node is initialized.
Keyword arguments:
-* sock -- a previously connected HA socket
-* mynodeid -- name of this secondary node (Value or string)
-* primary\_id -- name of the primary node (Value or string)
-* primary\_ip -- ip address of the primary node
-* waitreply -- synchronous or not (bool)
+* sock -- a previously connected HA socket
+* mynodeid -- name of this secondary node (Value or string)
+* primary_id -- name of the primary node (Value or string)
+* primary_ip -- ip address of the primary node
+* waitreply -- synchronous or not (bool)
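+
+E.g. (a sketch; hasock is a previously connected HA socket, and the node
+names and primary address are illustrative only):
+
+    ha.besecondary(hasock, 'node2', 'node1', '198.51.100.1', True)
+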
### beslave
@@ -75,7 +89,8 @@ Keyword arguments:
beslave(sock, mynodeid, primary_id, primary_ip, waitreply) -> None
```
-This function is deprecated and will be removed. Use besecondary() instead.
+This function is deprecated and will be removed.
+Use besecondary() instead.
### connect
@@ -83,36 +98,42 @@ This function is deprecated and will be removed. Use besecondary() instead.
connect(sock, token, ip, port, pstr) -> None
```
-Connect a HA socket which can be used to control a NCS HA node. The token is a secret string that must be shared by all participants in the cluster. There can only be one HA socket towards NCS. A new call to ha\_connect() makes NCS close the previous connection and reset the token to the new value.
+Connect a HA socket which can be used to control a NCS HA node. The token
+is a secret string that must be shared by all participants in the cluster.
+There can only be one HA socket towards NCS. A new call to
+ha_connect() makes NCS close the previous connection and reset the token to
+the new value.
Keyword arguments:
* sock -- a Python socket instance
* token -- secret string
-* ip -- the ip address if socket is AF\_INET or AF\_INET6 (optional)
-* port -- the port if socket is AF\_INET or AF\_INET6 (optional)
-* pstr -- a filename if socket is AF\_UNIX (optional).
+* ip -- the ip address if socket is AF_INET or AF_INET6 (optional)
+* port -- the port if socket is AF_INET or AF_INET6 (optional)
+* pstr -- a filename if socket is AF_UNIX (optional).
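+
+E.g. (a sketch; the token, address and port are illustrative only):
+
+    import socket
+    from _ncs import ha
+
+    hasock = socket.socket()
+    ha.connect(hasock, 'shared-secret', ip='127.0.0.1', port=4569)
+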
-### secondary\_dead
+### secondary_dead
```python
secondary_dead(sock, nodeid) -> None
```
-This function must be used by the application to inform NCS HA subsystem that another node which is possibly connected to NCS is dead.
+This function must be used by the application to inform NCS HA subsystem
+that another node which is possibly connected to NCS is dead.
Keyword arguments:
* sock -- a previously connected HA socket
* nodeid -- name of the node (Value or string)
-### slave\_dead
+### slave_dead
```python
slave_dead(sock, nodeid) -> None
```
-This function is deprecated and will be removed. Use secondary\_dead() instead.
+This function is deprecated and will be removed.
+Use secondary_dead() instead.
### status
@@ -122,12 +143,15 @@ status(sock) -> None
Query a ConfD HA node for its status.
-Returns a 2-tuple of the HA status of the node in the format (State,\[list\_of\_nodes]) where 'list\_of\_nodes' is the primary/secondary(s) connected with node.
+Returns a 2-tuple of the HA status of the node in the format
+(State, [list_of_nodes]) where 'list_of_nodes' is the primary/secondary(s)
+connected with the node.
Keyword arguments:
* sock -- a previously connected HA socket
+
## Predefined Values
```python
diff --git a/developer-reference/pyapi/_ncs.maapi.md b/developer-reference/pyapi/_ncs.maapi.md
index 96264589..96b321e3 100644
--- a/developer-reference/pyapi/_ncs.maapi.md
+++ b/developer-reference/pyapi/_ncs.maapi.md
@@ -1,14 +1,20 @@
-# \_ncs.maapi Module
+# Python _ncs.maapi Module
-Low level module for connecting to NCS with a read/write interface inside transactions.
+Low level module for connecting to NCS with a read/write interface
+inside transactions.
-This module is used to connect to the NCS transaction manager. The API described here has several purposes. We can use MAAPI when we wish to implement our own proprietary management agent. We also use MAAPI to attach to already existing NCS transactions, for example when we wish to implement semantic validation of configuration data in Python, and also when we wish to implement CLI wizards in Python.
+This module is used to connect to the NCS transaction manager.
+The API described here has several purposes. We can use MAAPI when we wish
+to implement our own proprietary management agent.
+We also use MAAPI to attach to already existing NCS transactions, for
+example when we wish to implement semantic validation of configuration
+data in Python, and also when we wish to implement CLI wizards in Python.
-This documentation should be read together with the [confd\_lib\_maapi(3)](../../resources/man/confd_lib_maapi.3.md) man page.
+This documentation should be read together with the [confd_lib_maapi(3)](../../resources/man/confd_lib_maapi.3.md) man page.
## Functions
-### aaa\_reload
+### aaa_reload
```python
aaa_reload(sock, synchronous) -> None
@@ -16,14 +22,18 @@ aaa_reload(sock, synchronous) -> None
Start a reload of aaa from external data provider.
-Used by external data provider to notify that there is a change to the AAA data. Calling the function with the argument 'synchronous' set to 1 or True means that the call will block until the loading is completed.
+Used by external data provider to notify that there is a change to the AAA
+data. Calling the function with the argument 'synchronous' set to 1 or True
+means that the call will block until the loading is completed.
Keyword arguments:
* sock -- a python socket instance
-* synchronous -- if 1, will wait for the loading complete and return when the loading is complete; if 0, will only initiate the loading of AAA data and return immediately
+* synchronous -- if 1, will wait for the loading to complete and return
+   when the loading is complete; if 0, will only initiate the loading of
+   AAA data and return immediately
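+
+E.g. (a sketch, assuming the module has been imported as maapi and sock is
+a connected maapi socket; blocks until the AAA data has been reloaded):
+
+    maapi.aaa_reload(sock, True)
+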
-### aaa\_reload\_path
+### aaa_reload_path
```python
aaa_reload_path(sock, synchronous, path) -> None
@@ -31,15 +41,18 @@ aaa_reload_path(sock, synchronous, path) -> None
Start a reload of aaa from external data provider.
-A variant of \_maapi\_aaa\_reload() that causes only the AAA subtree given by path to be loaded.
+A variant of _maapi_aaa_reload() that causes only the AAA subtree given by
+path to be loaded.
Keyword arguments:
* sock -- a python socket instance
-* synchronous -- if 1, will wait for the loading complete and return when the loading is complete; if 0, will only initiate the loading of AAA data and return immediately
+* synchronous -- if 1, will wait for the loading to complete and return
+   when the loading is complete; if 0, will only initiate the loading of
+   AAA data and return immediately
* path -- the subtree to be loaded
-### abort\_trans
+### abort_trans
```python
abort_trans(sock, thandle) -> None
@@ -52,7 +65,7 @@ Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
-### abort\_upgrade
+### abort_upgrade
```python
abort_upgrade(sock) -> None
@@ -66,13 +79,15 @@ Keyword arguments:
* sock -- a python socket instance
-### apply\_template
+### apply_template
```python
apply_template(sock, thandle, template, variables, flags, rootpath) -> None
```
-Apply a template that has been loaded into NCS. The template parameter gives the name of the template. This is NOT a FASTMAP function, for that use shared\_ncs\_apply\_template instead.
+Apply a template that has been loaded into NCS. The template parameter gives
+the name of the template. This is NOT a FASTMAP function; for that, use
+shared_ncs_apply_template instead.
Keyword arguments:
@@ -83,7 +98,7 @@ Keyword arguments:
* flags -- should be 0
* rootpath -- in what context to apply the template
-### apply\_trans
+### apply_trans
```python
apply_trans(sock, thandle, keepopen) -> None
@@ -91,7 +106,10 @@ apply_trans(sock, thandle, keepopen) -> None
Apply a transaction.
-Validates, prepares and eventually commits or aborts the transaction. If the validation fails and the 'keep\_open' argument is set to 1 or True, the transaction is left open and the developer can react upon the validation errors.
+Validates, prepares and eventually commits or aborts the transaction. If
+the validation fails and the 'keepopen' argument is set to 1 or True, the
+transaction is left open and the developer can react to the validation
+errors.
Keyword arguments:
@@ -99,13 +117,13 @@ Keyword arguments:
* thandle -- transaction handle
* keepopen -- if true, transaction is not discarded if validation fails
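+
+A minimal sketch of applying a transaction, assuming sock is a connected
+MAAPI socket, th is the handle of an open read-write transaction (started
+elsewhere in this API) and that a failed validation is reported as an
+_ncs.error.Error:
+
+```python
+import _ncs
+from _ncs import maapi
+
+def try_apply(sock, th):
+    try:
+        # keepopen=True leaves the transaction open if validation fails
+        maapi.apply_trans(sock, th, True)
+    except _ncs.error.Error as e:
+        # react to the validation error, then discard the transaction
+        print('apply failed: %s' % e)
+        maapi.abort_trans(sock, th)
+```
+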
-### apply\_trans\_flags
+### apply_trans_flags
```python
apply_trans_flags(sock, thandle, keepopen, flags) -> None
```
-A variant of apply\_trans() that takes an additional 'flags' argument.
+A variant of apply_trans() that takes an additional 'flags' argument.
Keyword arguments:
@@ -114,13 +132,13 @@ Keyword arguments:
* keepopen -- if true, transaction is not discarded if validation fails
* flags -- flags to set in the transaction
-### apply\_trans\_params
+### apply_trans_params
```python
apply_trans_params(sock, thandle, keepopen, params) -> list
```
-A variant of apply\_trans() that takes commit parameters in form of a list ofTagValue objects and returns a list of TagValue objects depending on theparameters passed in.
+A variant of apply_trans() that takes commit parameters in the form of a
+list of TagValue objects and returns a list of TagValue objects depending
+on the parameters passed in.
Keyword arguments:
@@ -140,7 +158,7 @@ Attach to a existing transaction.
Keyword arguments:
* sock -- a python socket instance
-* hashed\_ns -- the namespace to use
+* hashed_ns -- the namespace to use
* ctx -- transaction context
### attach2
@@ -149,22 +167,24 @@ Keyword arguments:
attach2(sock, hashed_ns, usid, thandle) -> None
```
-Used when there is no transaction context beforehand, to attach to a existing transaction.
+Used when there is no transaction context beforehand, to attach to an
+existing transaction.
Keyword arguments:
* sock -- a python socket instance
-* hashed\_ns -- the namespace to use
+* hashed_ns -- the namespace to use
* usid -- user session id, can be set to 0 to use the owner of the transaction
* thandle -- transaction handle
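+
+A minimal sketch of attaching to an existing transaction, for instance from
+code that only knows the transaction handle. hashed_ns=0 is assumed here to
+mean 'no particular namespace', usid=0 uses the owner of the transaction as
+described above, and the keypath is purely illustrative:
+
+```python
+from _ncs import maapi
+
+def read_from_existing_trans(sock, thandle):
+    maapi.attach2(sock, 0, 0, thandle)
+    try:
+        # example leaf path, for illustration only
+        val = maapi.get_elem(sock, thandle,
+                             '/aaa:aaa/authentication/users/user{admin}/homedir')
+        return str(val)
+    finally:
+        maapi.detach2(sock, thandle)
+```
+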
-### attach\_init
+### attach_init
```python
attach_init(sock) -> int
```
-Attach the \_MAAPI socket to the special transaction available during phase0. Returns the thandle as an integer.
+Attach the _MAAPI socket to the special transaction available during phase0.
+Returns the thandle as an integer.
Keyword arguments:
@@ -176,7 +196,13 @@ Keyword arguments:
authenticate(sock, user, password, n) -> tuple
```
-Authenticate a user session. Use the 'n' to get a list of n-1 groups that the user is a member of. Use n=1 if the function is used in a context where the group names are not needed. Returns 1 if accepted without groups. If the authentication failed or was accepted a tuple with first element status code, 0 for rejection and 1 for accepted is returned. The second element either contains the reason for the rejection as a string OR a list groupnames.
+Authenticate a user session. Use 'n' to get a list of n-1 groups that the
+user is a member of. Use n=1 if the function is used in a context where
+the group names are not needed. Returns 1 if accepted without groups.
+If the authentication failed or was accepted, a tuple is returned whose
+first element is a status code, 0 for rejection and 1 for acceptance.
+The second element contains either the reason for the rejection as a
+string OR a list of groupnames.
Keyword arguments:
@@ -191,18 +217,23 @@ Keyword arguments:
authenticate2(sock, user, password, src_addr, src_port, context, prot, n) -> tuple
```
-This function does the same thing as maapi.authenticate(), but allows for passing of the additional parameters src\_addr, src\_port, context, and prot, which otherwise are passed only to maapi\_start\_user\_session()/ maapi\_start\_user\_session2(). The parameters are passed on to an external authentication executable. Keyword arguments:
+This function does the same thing as maapi.authenticate(), but allows for
+passing of the additional parameters src_addr, src_port, context, and prot,
+which otherwise are passed only to maapi_start_user_session()/
+maapi_start_user_session2(). The parameters are passed on to an external
+authentication executable.
+Keyword arguments:
* sock -- a python socket instance
* user -- username
* pass -- password
-* src\_addr -- ip address
-* src\_port -- port number
+* src_addr -- ip address
+* src_port -- port number
* context -- context for the session
* prot -- the protocol used by the client for connecting
* n -- number of groups to return
-### candidate\_abort\_commit
+### candidate_abort_commit
```python
candidate_abort_commit(sock) -> None
@@ -214,20 +245,20 @@ Keyword arguments:
* sock -- a python socket instance
-### candidate\_abort\_commit\_persistent
+### candidate_abort_commit_persistent
```python
candidate_abort_commit_persistent(sock, persist_id) -> None
```
-Cancel an ongoing confirmed commit with the cookie given by persist\_id.
+Cancel an ongoing confirmed commit with the cookie given by persist_id.
Keyword arguments:
* sock -- a python socket instance
-* persist\_id -- gives the cookie for an already ongoing persistent confirmed commit
+* persist_id -- gives the cookie for an already ongoing persistent confirmed commit
-### candidate\_commit
+### candidate_commit
```python
candidate_commit(sock) -> None
@@ -239,73 +270,83 @@ Keyword arguments:
* sock -- a python socket instance
-### candidate\_commit\_info
+### candidate_commit_info
```python
candidate_commit_info(sock, persist_id, label, comment) -> None
```
-Commit the candidate to running, or confirm an ongoing confirmed commit, and set the Label and/or Comment that is stored in the rollback file when the candidate is committed to running.
+Commit the candidate to running, or confirm an ongoing confirmed commit,
+and set the Label and/or Comment that is stored in the rollback file when
+the candidate is committed to running.
Note:
-
-> To ensure the Label and/or Comment are stored in the rollback file in all cases when doing a confirmed commit, they must be given with both, the confirmed commit (using maapi\_candidate\_confirmed\_commit\_info()) and the confirming commit (using this function).
+> To ensure the Label and/or Comment are stored in the rollback file in
+> all cases when doing a confirmed commit, they must be given with both
+> the confirmed commit (using maapi_candidate_confirmed_commit_info())
+> and the confirming commit (using this function).
Keyword arguments:
* sock -- a python socket instance
-* persist\_id -- gives the cookie for an already ongoing persistent confirmed commit
+* persist_id -- gives the cookie for an already ongoing persistent confirmed commit
* label -- the Label
* comment -- the Comment
-### candidate\_commit\_persistent
+### candidate_commit_persistent
```python
candidate_commit_persistent(sock, persist_id) -> None
```
-Confirm an ongoing persistent commit with the cookie given by persist\_id.
+Confirm an ongoing persistent commit with the cookie given by persist_id.
Keyword arguments:
* sock -- a python socket instance
-* persist\_id -- gives the cookie for an already ongoing persistent confirmed commit
+* persist_id -- gives the cookie for an already ongoing persistent confirmed commit
-### candidate\_confirmed\_commit
+### candidate_confirmed_commit
```python
candidate_confirmed_commit(sock, timeoutsecs) -> None
```
-This function also copies the candidate into running. However if a call to maapi\_candidate\_commit() is not done within timeoutsecs an automatic rollback will occur.
+This function also copies the candidate into running. However, if a call to
+maapi_candidate_commit() is not done within timeoutsecs, an automatic
+rollback will occur.
Keyword arguments:
* sock -- a python socket instance
* timeoutsecs -- timeout in seconds
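+
+A minimal sketch of the confirmed-commit pattern using the candidate
+functions in this module; sock is assumed to be a connected MAAPI socket
+with an active user session and the candidate datastore enabled:
+
+```python
+from _ncs import maapi
+
+def commit_with_safety_net(sock, looks_good):
+    # copy candidate to running; roll back automatically after 120 seconds
+    # unless the commit is confirmed
+    maapi.candidate_confirmed_commit(sock, 120)
+    if looks_good():
+        maapi.candidate_commit(sock)        # confirm within the timeout
+    else:
+        maapi.candidate_abort_commit(sock)  # cancel and roll back now
+```
+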
-### candidate\_confirmed\_commit\_info
+### candidate_confirmed_commit_info
```python
candidate_confirmed_commit_info(sock, timeoutsecs, persist, persist_id, label, comment) -> None
```
-Like candidate\_confirmed\_commit\_persistent, but also allows for setting the Label and/or Comment that is stored in the rollback file when the candidate is committed to running.
+Like candidate_confirmed_commit_persistent, but also allows for setting the
+Label and/or Comment that is stored in the rollback file when the candidate
+is committed to running.
Note:
-
-> To ensure the Label and/or Comment are stored in the rollback file in all cases when doing a confirmed commit, they must be given with both, the confirmed commit (using this function) and the confirming commit (using candidate\_commit\_info()).
+> To ensure the Label and/or Comment are stored in the rollback file in
+> all cases when doing a confirmed commit, they must be given with both
+> the confirmed commit (using this function) and the confirming commit
+> (using candidate_commit_info()).
Keyword arguments:
* sock -- a python socket instance
* timeoutsecs -- timeout in seconds
* persist -- sets the cookie for the persistent confirmed commit
-* persist\_id -- gives the cookie for an already ongoing persistent confirmed commit
+* persist_id -- gives the cookie for an already ongoing persistent confirmed commit
* label -- the Label
* comment -- the Comment
-### candidate\_confirmed\_commit\_persistent
+### candidate_confirmed_commit_persistent
```python
candidate_confirmed_commit_persistent(sock, timeoutsecs, persist, persist_id) -> None
@@ -318,9 +359,9 @@ Keyword arguments:
* sock -- a python socket instance
* timeoutsecs -- timeout in seconds
* persist -- sets the cookie for the persistent confirmed commit
-* persist\_id -- gives the cookie for an already ongoing persistent confirmed commit
+* persist_id -- gives the cookie for an already ongoing persistent confirmed commit
-### candidate\_reset
+### candidate_reset
```python
candidate_reset(sock) -> None
@@ -332,7 +373,7 @@ Keyword arguments:
* sock -- a python socket instance
-### candidate\_validate
+### candidate_validate
```python
candidate_validate(sock) -> None
@@ -358,7 +399,7 @@ Keyword arguments:
* thandle -- transaction handle
* path -- position to change to
-### clear\_opcache
+### clear_opcache
```python
clear_opcache(sock, path) -> None
@@ -371,7 +412,7 @@ Keyword arguments:
* sock -- a python socket instance
* path -- the path to the subtree to clear
-### cli\_accounting
+### cli_accounting
```python
cli_accounting(sock, user, usid, cmdstr) -> None
@@ -385,7 +426,7 @@ Keyword arguments:
* user -- user to generate the entry for
* thandle -- transaction handle
-### cli\_cmd
+### cli_cmd
```python
cli_cmd(sock, usess, buf) -> None
@@ -399,13 +440,15 @@ Keyword arguments:
* usess -- user session
* buf -- string to write
-### cli\_cmd2
+### cli_cmd2
```python
cli_cmd2(sock, usess, buf, flags) -> None
```
-Execute CLI command in a ongoing CLI session. With flags: CMD\_NO\_FULLPATH - Do not perform the fullpath check on show commands. CMD\_NO\_HIDDEN - Allows execution of hidden CLI commands.
+Execute a CLI command in an ongoing CLI session. With flags:
+CMD_NO_FULLPATH - Do not perform the fullpath check on show commands.
+CMD_NO_HIDDEN - Allows execution of hidden CLI commands.
Keyword arguments:
@@ -414,7 +457,7 @@ Keyword arguments:
* buf -- string to write
* flags -- as above
-### cli\_cmd3
+### cli_cmd3
```python
cli_cmd3(sock, usess, buf, flags, unhide) -> None
@@ -428,9 +471,10 @@ Keyword arguments:
* usess -- user session
* buf -- string to write
* flags -- as above
-* unhide -- The unhide parameter is used for passing a hide group which is unhidden during the execution of the command.
+* unhide -- The unhide parameter is used for passing a hide group which is
+ unhidden during the execution of the command.
-### cli\_cmd4
+### cli_cmd4
```python
cli_cmd4(sock, usess, buf, flags, unhide) -> None
@@ -444,15 +488,17 @@ Keyword arguments:
* usess -- user session
* buf -- string to write
* flags -- as above
-* unhide -- The unhide parameter is used for passing a hide group which is unhidden during the execution of the command.
+* unhide -- The unhide parameter is used for passing a hide group which is
+ unhidden during the execution of the command.
-### cli\_cmd\_to\_path
+### cli_cmd_to_path
```python
cli_cmd_to_path(sock, line, nsize, psize) -> tuple
```
-Returns string of the C/I namespaced CLI path that can be associated with the given command. Returns a tuple ns and path.
+Gives the C/I namespaced CLI path that can be associated with the given
+command. Returns a tuple of ns and path.
Keyword arguments:
@@ -461,13 +507,15 @@ Keyword arguments:
* nsize -- limit length of namespace
* psize -- limit length of path
-### cli\_cmd\_to\_path2
+### cli_cmd_to_path2
```python
cli_cmd_to_path2(sock, thandle, line, nsize, psize) -> tuple
```
-Returns string of the C/I namespaced CLI path that can be associated with the given command. In the context of the provided transaction handle. Returns a tuple ns and path.
+Gives the C/I namespaced CLI path that can be associated with the given
+command, in the context of the provided transaction handle.
+Returns a tuple of ns and path.
Keyword arguments:
@@ -477,24 +525,26 @@ Keyword arguments:
* nsize -- limit length of namespace
* psize -- limit length of path
-### cli\_diff\_cmd
+### cli_diff_cmd
```python
cli_diff_cmd(sock, thandle, thandle_old, flags, path, size) -> str
```
-Get the diff between two sessions as a series C/I cli commands. Returns a string. If no changes exist between the two sessions for the given path a \_ncs.error.Error will be thrown with the error set to ERR\_BADPATH
+Get the diff between two sessions as a series of C/I CLI commands. Returns
+a string. If no changes exist between the two sessions for the given path,
+a _ncs.error.Error will be thrown with the error set to ERR_BADPATH.
Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
-* thandle\_old -- transaction handle
-* flags -- as for cli\_path\_cmd
-* path -- as for cli\_path\_cmd
+* thandle_old -- transaction handle
+* flags -- as for cli_path_cmd
+* path -- as for cli_path_cmd
* size -- limit diff
-### cli\_get
+### cli_get
```python
cli_get(sock, usess, opt, size) -> str
@@ -509,13 +559,18 @@ Keyword arguments:
* opt -- option to get
* size -- maximum response size (optional, default 1024)
-### cli\_path\_cmd
+### cli_path_cmd
```python
cli_path_cmd(sock, thandle, flags, path, size) -> str
```
-Returns string of the C/I CLI command that can be associated with the given path. The flags can be given as FLAG\_EMIT\_PARENTS to enable the commands to reach the submode for the path to be emitted. The flags can be given as FLAG\_DELETE to emit the command to delete the given path. The flags can be given as FLAG\_NON\_RECURSIVE to prevent that all children to a container or list item are displayed.
+Returns a string of the C/I CLI command that can be associated with the
+given path. The flags can be given as FLAG_EMIT_PARENTS to enable the
+commands to reach the submode for the path to be emitted. The flags can be
+given as FLAG_DELETE to emit the command to delete the given path. The
+flags can be given as FLAG_NON_RECURSIVE to prevent all children of a
+container or list item from being displayed.
Keyword arguments:
@@ -525,7 +580,7 @@ Keyword arguments:
* path -- the path for the cmd
* size -- limit cmd
-### cli\_prompt
+### cli_prompt
```python
cli_prompt(sock, usess, prompt, echo, size) -> str
@@ -538,10 +593,11 @@ Keyword arguments:
* sock -- a python socket instance
* usess -- user session
* prompt -- string to show the user
-* echo -- determines wether to control if the input should be echoed or not. ECHO shows the input, NOECHO does not
+* echo -- determines whether the input should be echoed or not.
+  ECHO shows the input, NOECHO does not
* size -- maximum response size (optional, default 1024)
-### cli\_set
+### cli_set
```python
cli_set(sock, usess, opt, value) -> None
@@ -556,7 +612,7 @@ Keyword arguments:
* opt -- option to set
* value -- the new value of the session parameter
-### cli\_write
+### cli_write
```python
cli_write(sock, usess, buf) -> None
@@ -582,7 +638,7 @@ Keyword arguments:
* sock -- a python socket instance
-### commit\_trans
+### commit_trans
```python
commit_trans(sock, thandle) -> None
@@ -595,7 +651,7 @@ Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
-### commit\_upgrade
+### commit_upgrade
```python
commit_upgrade(sock) -> None
@@ -607,13 +663,15 @@ Keyword arguments:
* sock -- a python socket instance
-### confirmed\_commit\_in\_progress
+### confirmed_commit_in_progress
```python
confirmed_commit_in_progress(sock) -> int
```
-Checks whether a confirmed commit is ongoing. Returns a positive integer being the usid of confirmed commit operation in progress or 0 if no confirmed commit is in progress.
+Checks whether a confirmed commit is ongoing. Returns a positive integer
+being the usid of the confirmed commit operation in progress, or 0 if no
+confirmed commit is in progress.
Keyword arguments:
@@ -632,7 +690,7 @@ Keyword arguments:
* sock -- a python socket instance
* ip -- the ip address
* port -- the port
-* path -- the path if socket is AF\_UNIX (optional)
+* path -- the path if socket is AF_UNIX (optional)
### copy
@@ -645,10 +703,10 @@ Copy all data from one data store to another.
Keyword arguments:
* sock -- a python socket instance
-* from\_thandle -- transaction handle
-* to\_thandle -- transaction handle
+* from_thandle -- transaction handle
+* to_thandle -- transaction handle
-### copy\_path
+### copy_path
```python
copy_path(sock, from_thandle, to_thandle, path) -> None
@@ -659,11 +717,11 @@ Copy subtree rooted at path from one data store to another.
Keyword arguments:
* sock -- a python socket instance
-* from\_thandle -- transaction handle
-* to\_thandle -- transaction handle
+* from_thandle -- transaction handle
+* to_thandle -- transaction handle
* path -- the subtree rooted at path is copied
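+
+A one-call sketch of copy_path(); the two handles are assumed to belong to
+open transactions against the data stores to copy between, and the keypath
+is only an example:
+
+```python
+from _ncs import maapi
+
+def copy_device_config(sock, th_from, th_to):
+    # copy the subtree rooted at the given (example) path from one
+    # transaction to the other
+    maapi.copy_path(sock, th_from, th_to, '/devices/device{ce0}/config')
+```
+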
-### copy\_running\_to\_startup
+### copy_running_to_startup
```python
copy_running_to_startup(sock) -> None
@@ -675,7 +733,7 @@ Keyword arguments:
* sock -- a python socket instance
-### copy\_tree
+### copy_tree
```python
copy_tree(sock, thandle, frompath, topath) -> None
@@ -695,7 +753,9 @@ Keyword arguments:
create(sock, thandle, path) -> None
```
-Create a new list entry, a presence container or a leaf of type empty (unless in a union, if type empty is in a union use set\_elem instead) in the data tree.
+Create a new list entry, a presence container or a leaf of type empty
+(unless the type empty leaf is in a union, in which case
+use set_elem instead) in the data tree.
Keyword arguments:
@@ -703,7 +763,7 @@ Keyword arguments:
* thandle -- transaction handle
* path -- path of item to create
-### cs\_node\_cd
+### cs_node_cd
```python
cs_node_cd(socket, thandle, path) -> Union[_ncs.CsNode, None]
@@ -711,7 +771,9 @@ cs_node_cd(socket, thandle, path) -> Union[_ncs.CsNode, None]
Utility function which finds the resulting CsNode given a string keypath.
-Does the same thing as \_ncs.cs\_node\_cd(), but can handle paths that are ambiguous due to traversing a mount point, by sending a request to the daemon
+Does the same thing as _ncs.cs_node_cd(), but can handle paths that are
+ambiguous due to traversing a mount point, by sending a request to the
+daemon.
Keyword arguments:
@@ -719,19 +781,24 @@ Keyword arguments:
* thandle -- transaction handle
* path -- the keypath
-### cs\_node\_children
+### cs_node_children
```python
cs_node_children(sock, thandle, mount_point, path) -> List[_ncs.CsNode]
```
-Retrieve a list of the children nodes of the node given by mount\_point that are valid for path. The mount\_point node must be a mount point (i.e. mount\_point.is\_mount\_point() == True), and the path must lead to a specific instance of this node (including the final keys if mount\_point is a list node). The thandle parameter is optional, i.e. it can be given as -1 if a transaction is not available.
+Retrieve a list of the child nodes of the node given by mount_point
+that are valid for path. The mount_point node must be a mount point
+(i.e. mount_point.is_mount_point() == True), and the path must lead to
+a specific instance of this node (including the final keys if mount_point
+is a list node). The thandle parameter is optional, i.e. it can be given
+as -1 if a transaction is not available.
Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
-* mount\_point -- a CsNode instance
+* mount_point -- a CsNode instance
* path -- the path to the instance of the node
### delete
@@ -740,7 +807,8 @@ Keyword arguments:
delete(sock, thandle, path) -> None
```
-Delete an existing list entry, a presence container or a leaf of type empty from the data tree.
+Delete an existing list entry, a presence container or a leaf of type empty
+from the data tree.
Keyword arguments:
@@ -748,7 +816,7 @@ Keyword arguments:
* thandle -- transaction handle
* path -- path of item to delete
-### delete\_all
+### delete_all
```python
delete_all(sock, thandle, how) -> None
@@ -756,15 +824,21 @@ delete_all(sock, thandle, how) -> None
Delete all data within a transaction.
-The how argument specifies how to delete: DEL\_SAFE - Delete everything except namespaces that were exported with tailf:export none. Top-level nodes that cannot be deleted due to AAA rules are left in place (descendant nodes may be deleted if the rules allow it). DEL\_EXPORTED - As DEL\_SAFE, but AAA rules are ignored. DEL\_ALL - Delete everything, AAA rules are ignored.
+The how argument specifies how to delete:
+ DEL_SAFE - Delete everything except namespaces that were exported with
+ tailf:export none. Top-level nodes that cannot be deleted
+ due to AAA rules are left in place (descendant nodes may be
+ deleted if the rules allow it).
+ DEL_EXPORTED - As DEL_SAFE, but AAA rules are ignored.
+ DEL_ALL - Delete everything, AAA rules are ignored.
Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
-* how -- DEL\_SAFE, DEL\_EXPORTED or DEL\_ALL
+* how -- DEL_SAFE, DEL_EXPORTED or DEL_ALL
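+
+A minimal sketch, assuming th is an open read-write transaction and that
+the DEL_* constants listed above are exposed by the _ncs.maapi module:
+
+```python
+from _ncs import maapi
+
+def clear_transaction(sock, th):
+    # DEL_SAFE keeps namespaces exported with 'tailf:export none' and
+    # respects AAA rules, as described above
+    maapi.delete_all(sock, th, maapi.DEL_SAFE)
+```
+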
-### delete\_config
+### delete_config
```python
delete_config(sock, name) -> None
@@ -777,7 +851,7 @@ Keyword arguments:
* sock -- a python socket instance
* name -- name of the datastore to empty
-### destroy\_cursor
+### destroy_cursor
```python
destroy_cursor(mc) -> None
@@ -795,7 +869,7 @@ Keyword arguments:
detach(sock, ctx) -> None
```
-Detaches an attached \_MAAPI socket.
+Detaches an attached _MAAPI socket.
Keyword arguments:
@@ -808,14 +882,15 @@ Keyword arguments:
detach2(sock, thandle) -> None
```
-Detaches an attached \_MAAPI socket when we do not have a transaction context available.
+Detaches an attached _MAAPI socket when we do not have a transaction context
+available.
Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
-### diff\_iterate
+### diff_iterate
```python
diff_iterate(sock, thandle, iter, flags) -> None
@@ -823,49 +898,53 @@ diff_iterate(sock, thandle, iter, flags) -> None
Iterate through a transaction diff.
-For each diff in the transaction the callback function 'iter' will be called. The iter function needs to have the following signature:
+For each diff in the transaction the callback function 'iter' will be
+called. The iter function needs to have the following signature:
-```
-def iter(keypath, operation, oldvalue, newvalue)
-```
+ def iter(keypath, operation, oldvalue, newvalue)
Where arguments are:
* keypath - the affected path (HKeypathRef)
-* operation - one of MOP\_CREATED, MOP\_DELETED, MOP\_MODIFIED, MOP\_VALUE\_SET, MOP\_MOVED\_AFTER, or MOP\_ATTR\_SET
+* operation - one of MOP_CREATED, MOP_DELETED, MOP_MODIFIED, MOP_VALUE_SET,
+ MOP_MOVED_AFTER, or MOP_ATTR_SET
* oldvalue - always None
* newvalue - see below
-The 'newvalue' argument may be set for operation MOP\_VALUE\_SET and is a Value object in that case. For MOP\_MOVED\_AFTER it may be set to a list of key values identifying an entry in the list - if it's None the list entry has been moved to the beginning of the list. For MOP\_ATTR\_SET it will be set to a 2-tuple of Value's where the first Value is the attribute set and the second Value is the value the attribute was set to. If the attribute has been deleted the second value is of type C\_NOEXISTS
+The 'newvalue' argument may be set for operation MOP_VALUE_SET and is a
+Value object in that case. For MOP_MOVED_AFTER it may be set to a list of
+key values identifying an entry in the list - if it's None the list entry
+has been moved to the beginning of the list. For MOP_ATTR_SET it will be
+set to a 2-tuple of Value's where the first Value is the attribute set
+and the second Value is the value the attribute was set to. If the
+attribute has been deleted, the second value is of type C_NOEXISTS.
The iter function should return one of:
-* ITER\_STOP - Stop further iteration
-* ITER\_RECURSE - Recurse further down the node children
-* ITER\_CONTINUE - Ignore node children and continue with the node's siblings
+* ITER_STOP - Stop further iteration
+* ITER_RECURSE - Recurse further down the node children
+* ITER_CONTINUE - Ignore node children and continue with the node's siblings
One could also define a class implementing the call function as:
-```
-class DiffIterator(object):
- def __init__(self):
- self.count = 0
+ class DiffIterator(object):
+ def __init__(self):
+ self.count = 0
- def __call__(self, kp, op, oldv, newv):
- print('kp={0}, op={1}, oldv={2}, newv={3}'.format(
- str(kp), str(op), str(oldv), str(newv)))
- self.count += 1
- return _confd.ITER_RECURSE
-```
+ def __call__(self, kp, op, oldv, newv):
+ print('kp={0}, op={1}, oldv={2}, newv={3}'.format(
+ str(kp), str(op), str(oldv), str(newv)))
+ self.count += 1
+ return _confd.ITER_RECURSE
Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
* iter -- iterator function, will be called for every diff in the transaction
-* flags -- bitmask of ITER\_WANT\_ATTR and ITER\_WANT\_P\_CONTAINER
+* flags -- bitmask of ITER_WANT_ATTR and ITER_WANT_P_CONTAINER
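+
+A minimal sketch of driving diff_iterate() with a plain callback, assuming
+th is the handle of a transaction containing changes and that the ITER_*
+codes above are exposed by the _ncs module:
+
+```python
+import _ncs
+from _ncs import maapi
+
+def print_diff(sock, th):
+    def iter(kp, op, oldv, newv):
+        print('kp=%s op=%s newv=%s' % (kp, op, newv))
+        return _ncs.ITER_RECURSE   # also descend into child nodes
+    # flags=0; ITER_WANT_ATTR / ITER_WANT_P_CONTAINER could be ORed in
+    maapi.diff_iterate(sock, th, iter, 0)
+```
+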
-### disconnect\_remote
+### disconnect_remote
```python
disconnect_remote(sock, address) -> None
@@ -878,7 +957,7 @@ Keyword arguments:
* sock -- a python socket instance
* address -- ip address (string)
-### disconnect\_sockets
+### disconnect_sockets
```python
disconnect_sockets(sock, sockets) -> None
@@ -891,13 +970,15 @@ Keyword arguments:
* sock -- a python socket instance
* sockets -- list of sockets (int)
-### do\_display
+### do_display
```python
do_display(sock, thandle, path) -> int
```
-If the data model uses the YANG when or tailf:display-when statement, this function can be used to determine if the item given by 'path' should be displayed or not.
+If the data model uses the YANG when or tailf:display-when statement, this
+function can be used to determine if the item given by 'path' should
+be displayed or not.
Keyword arguments:
@@ -905,21 +986,22 @@ Keyword arguments:
* thandle -- transaction handle
* path -- path to the 'display-when' statement
-### end\_progress\_span
+### end_progress_span
```python
end_progress_span(sock, span, annotation) -> int
```
-Ends a progress span started from start\_progress\_span() or start\_progress\_span\_th().
+Ends a progress span started from start_progress_span() or
+start_progress_span_th().
Keyword arguments:
-
* sock -- a python socket instance
-* span -- span\_id (string) or dict with key 'span\_id'
-* annotation -- metadata about the event, indicating error, explains latency or shows result etc
+* span -- span_id (string) or dict with key 'span_id'
+* annotation -- metadata about the event; indicates an error, explains
+  latency, shows a result, etc.
-### end\_user\_session
+### end_user_session
```python
end_user_session(sock) -> None
@@ -945,32 +1027,34 @@ Keyword arguments:
* thandle -- transaction handle
* path -- position to check
-### find\_next
+### find_next
```python
find_next(mc, type, inkeys) -> Union[List[_ncs.Value], bool]
```
-Update the cursor mc with the key(s) for the list entry designated by the type and inkeys parameters. This function may be used to start a traversal from an arbitrary entry in a list. Keys for subsequent entries may be retrieved with the get\_next() function. When no more keys are found, False is returned.
+Update the cursor mc with the key(s) for the list entry designated by the
+type and inkeys parameters. This function may be used to start a traversal
+from an arbitrary entry in a list. Keys for subsequent entries may be
+retrieved with the get_next() function. When no more keys are found, False
+is returned.
The strategy to use is defined by type:
-```
-FIND_NEXT - The keys for the first list entry after the one
- indicated by the inkeys argument.
-FIND_SAME_OR_NEXT - If the values in the inkeys array completely
- identifies an actual existing list entry, the keys for
- this entry are requested. Otherwise the same logic as
- for FIND_NEXT above.
-```
+ FIND_NEXT - The keys for the first list entry after the one
+ indicated by the inkeys argument.
+ FIND_SAME_OR_NEXT - If the values in the inkeys array completely
+ identifies an actual existing list entry, the keys for
+ this entry are requested. Otherwise the same logic as
+ for FIND_NEXT above.
Keyword arguments:
* mc -- maapiCursor
-* type -- CONFD\_FIND\_NEXT or CONFD\_FIND\_SAME\_OR\_NEXT
+* type -- CONFD_FIND_NEXT or CONFD_FIND_SAME_OR_NEXT
* inkeys -- where to start finding
-### finish\_trans
+### finish_trans
```python
finish_trans(sock, thandle) -> None
@@ -978,14 +1062,15 @@ finish_trans(sock, thandle) -> None
Finish a transaction.
-If the transaction is implemented by an external database, this will invoke the finish() callback.
+If the transaction is implemented by an external database, this will invoke
+the finish() callback.
Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
-### get\_attrs
+### get_attrs
```python
get_attrs(sock, thandle, attrs, keypath) -> list
@@ -1000,7 +1085,7 @@ Keyword arguments:
* attrs -- list of type of attributes to get
* keypath -- path to choice
-### get\_authorization\_info
+### get_authorization_info
```python
get_authorization_info(sock, usessid) -> _ncs.AuthorizationInfo
@@ -1013,7 +1098,7 @@ Keyword arguments:
* sock -- a python socket instance
* usessid -- user session id
-### get\_case
+### get_case
```python
get_case(sock, thandle, choice, keypath) -> _ncs.Value
@@ -1028,7 +1113,7 @@ Keyword arguments:
* choice -- choice name
* keypath -- path to choice
-### get\_elem
+### get_elem
```python
get_elem(sock, thandle, path) -> _ncs.Value
@@ -1042,7 +1127,7 @@ Keyword arguments:
* thandle -- transaction handle
* path -- position of elem
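+
+A minimal sketch of reading a single leaf, assuming th is an open
+transaction; the keypath is only an example:
+
+```python
+from _ncs import maapi
+
+def read_leaf(sock, th):
+    # get_elem() returns an _ncs.Value; str() gives its textual form
+    val = maapi.get_elem(sock, th, '/ncs:devices/device{ce0}/address')
+    return str(val)
+```
+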
-### get\_my\_user\_session\_id
+### get_my_user_session_id
```python
get_my_user_session_id(sock) -> int
@@ -1054,19 +1139,20 @@ Keyword arguments:
* sock -- a python socket instance
-### get\_next
+### get_next
```python
get_next(mc) -> Union[List[_ncs.Value], bool]
```
-Iterates and gets the keys for the next entry in a list. When no more keys are found, False is returned.
+Iterates and gets the keys for the next entry in a list. When no more keys
+are found, False is returned.
Keyword arguments:
* mc -- maapiCursor
-### get\_object
+### get_object
```python
get_object(sock, thandle, n, keypath) -> List[_ncs.Value]
@@ -1080,13 +1166,14 @@ Keyword arguments:
* thandle -- transaction handle
* path -- position of list entry
-### get\_objects
+### get_objects
```python
get_objects(mc, n, nobj) -> List[_ncs.Value]
```
-Read at most n values from each nobj lists starting at Cursor mc. Returns a list of Value's.
+Read at most n values from each nobj lists starting at Cursor mc.
+Returns a list of Value's.
Keyword arguments:
@@ -1094,61 +1181,87 @@ Keyword arguments:
* n -- at most n values will be read
* nobj -- number of nobj lists which n elements will be taken from
-### get\_rollback\_id
+### get_rollback_id
```python
get_rollback_id(sock, thandle) -> int
```
-Get rollback id from a committed transaction. Returns int with fixed id, where -1 indicates an error or no rollback id available.
+Get the rollback id from a committed transaction. Returns an int with the
+fixed id, where -1 indicates an error or that no rollback id is available.
Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
-### get\_running\_db\_status
+### get_running_db_status
```python
get_running_db_status(sock) -> int
```
-If a transaction fails in the commit() phase, the configuration database is in in a possibly inconsistent state. This function queries ConfD on the consistency state. Returns 1 if the configuration is consistent and 0 otherwise.
+If a transaction fails in the commit() phase, the configuration database is
+in a possibly inconsistent state. This function queries ConfD on the
+consistency state. Returns 1 if the configuration is consistent and 0
+otherwise.
Keyword arguments:
* sock -- a python socket instance
-### get\_schema\_file\_path
+### get_schema_file_path
```python
get_schema_file_path(sock) -> str
```
-If shared memory schema support has been enabled, this function will return the pathname of the file used for the shared memory mapping, which can then be passed to the mmap\_schemas() function>
+If shared memory schema support has been enabled, this function will
+return the pathname of the file used for the shared memory mapping,
+which can then be passed to the mmap_schemas() function.
-If creation of the schema file is in progress when the function is called, the call will block until the creation has completed.
+If creation of the schema file is in progress when the function
+is called, the call will block until the creation has completed.
Keyword arguments:
* sock -- a python socket instance
-### get\_stream\_progress
+### get_stream_progress
```python
get_stream_progress(sock, id) -> int
```
-Used in conjunction with a maapi stream to see how much data has been consumed.
+Used in conjunction with a maapi stream to see how much data has been
+consumed.
-This function allows us to limit the amount of data 'in flight' between the application and the system. The sock parameter must be the maapi socket used for a function call that required a stream socket for writing (currently the only such function is load\_config\_stream()), and the id parameter is the id returned by that function.
+This function allows us to limit the amount of data 'in flight' between the
+application and the system. The sock parameter must be the maapi socket
+used for a function call that required a stream socket for writing
+(currently the only such function is load_config_stream()), and the id
+parameter is the id returned by that function.
Keyword arguments:
* sock -- a python socket instance
-* id -- the id returned from load\_config\_stream()
+* id -- the id returned from load_config_stream()
-### get\_templates
+### get_template_variables
+
+```python
+get_template_variables(sock, template_name, type) -> list
+```
+
+Get the template variables for a specific template.
+
+Keyword arguments:
+
+* sock -- a python socket instance
+* template_name -- the name of the template
+* type -- the type of the template (int)
+
+### get_templates
```python
get_templates(sock) -> list
@@ -1160,20 +1273,35 @@ Keyword arguments:
* sock -- a python socket instance
-### get\_trans\_params
+### get_trans_mode
+
+```python
+get_trans_mode(sock, thandle, mode) -> int
+```
+
+Get the transaction mode for a transaction.
+
+Keyword arguments:
+
+* sock -- a python socket instance
+* thandle -- transaction handle
+* mode -- the mode of transaction
+
+### get_trans_params
```python
get_trans_params(sock, thandle) -> list
```
-Get the commit parameters for a transaction. The commit parameters are returned as a list of TagValue objects.
+Get the commit parameters for a transaction. The commit parameters are
+returned as a list of TagValue objects.
Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
-### get\_user\_session
+### get_user_session
```python
get_user_session(sock, usessid) -> _ncs.UserInfo
@@ -1186,7 +1314,7 @@ Keyword arguments:
* sock -- a python socket instance
* usessid -- session id
-### get\_user\_session\_identification
+### get_user_session_identification
```python
get_user_session_identification(sock, usessid) -> dict
@@ -1194,27 +1322,30 @@ get_user_session_identification(sock, usessid) -> dict
Get user session identification data.
-Get the user identification data related to a user session provided by the 'usessid' argument. The function returns a dict with the user identification data.
+Get the user identification data related to a user session provided by the
+'usessid' argument. The function returns a dict with the user
+identification data.
Keyword arguments:
* sock -- a python socket instance
* usessid -- user session id
-### get\_user\_session\_opaque
+### get_user_session_opaque
```python
get_user_session_opaque(sock, usessid) -> str
```
-Returns a string containing additional 'opaque' information, if additional 'opaque' information is available.
+Returns a string containing additional 'opaque' information, if such
+information is available.
Keyword arguments:
* sock -- a python socket instance
* usessid -- user session id
-### get\_user\_sessions
+### get_user_sessions
```python
get_user_sessions(sock) -> list
@@ -1226,7 +1357,7 @@ Keyword arguments:
* sock -- a python socket instance
-### get\_values
+### get_values
```python
get_values(sock, thandle, values, keypath) -> list
@@ -1253,7 +1384,7 @@ Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
-### getcwd\_kpath
+### getcwd_kpath
```python
getcwd_kpath(sock, thandle) -> _ncs.HKeypathRef
@@ -1266,37 +1397,39 @@ Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
-### hide\_group
+### hide_group
```python
hide_group(sock, thandle, group_name) -> None
```
-Hide all nodes belonging to a hide group in a transaction that started with flag FLAG\_HIDE\_ALL\_HIDEGROUPS.
+Hide all nodes belonging to a hide group in a transaction that started
+with flag FLAG_HIDE_ALL_HIDEGROUPS.
Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
-* group\_name -- the group name
+* group_name -- the group name
-### init\_cursor
+### init_cursor
```python
init_cursor(sock, thandle, path) -> maapi.Cursor
```
-Whenever we wish to iterate over the entries in a list in the data tree, we must first initialize a cursor.
+Whenever we wish to iterate over the entries in a list in the data tree, we
+must first initialize a cursor.
Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
* path -- position of elem
-* secondary\_index -- name of secondary index to use (optional)
-* xpath\_expr -- xpath expression used to filter results (optional)
+* secondary_index -- name of secondary index to use (optional)
+* xpath_expr -- xpath expression used to filter results (optional)
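+
+A minimal sketch that combines init_cursor(), get_next() and
+destroy_cursor() (all documented in this module) to walk a list; the list
+path is only an example:
+
+```python
+from _ncs import maapi
+
+def print_list_keys(sock, th, path='/ncs:devices/device'):
+    mc = maapi.init_cursor(sock, th, path)
+    keys = maapi.get_next(mc)
+    while keys is not False:       # get_next() returns False at the end
+        print([str(k) for k in keys])
+        keys = maapi.get_next(mc)
+    maapi.destroy_cursor(mc)
+```
+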
-### init\_upgrade
+### init_upgrade
```python
init_upgrade(sock, timeoutsecs, flags) -> None
@@ -1307,8 +1440,10 @@ First step in an upgrade, initializes the upgrade procedure.
Keyword arguments:
* sock -- a python socket instance
-* timeoutsecs -- maximum time to wait for user to voluntarily exit from 'configuration' mode
-* flags -- 0 or 'UPGRADE\_KILL\_ON\_TIMEOUT' (will terminate all ongoing transactions
+* timeoutsecs -- maximum time to wait for user to voluntarily exit from
+ 'configuration' mode
+* flags -- 0 or 'UPGRADE_KILL_ON_TIMEOUT' (will terminate all ongoing
+  transactions)
### insert
@@ -1324,7 +1459,7 @@ Keyword arguments:
* thandle -- transaction handle
* path -- the subtree rooted at path is copied
-### install\_crypto\_keys
+### install_crypto_keys
```python
install_crypto_keys(sock) -> None
@@ -1336,7 +1471,7 @@ Keyword arguments:
* sock -- a python socket instance
-### is\_candidate\_modified
+### is_candidate_modified
```python
is_candidate_modified(sock) -> bool
@@ -1348,19 +1483,20 @@ Keyword arguments:
* sock -- a python socket instance
-### is\_lock\_set
+### is_lock_set
```python
is_lock_set(sock, name) -> int
```
-Check if db name is locked. Return the 'usid' of the user holding the lock or 0 if not locked.
+Check if db name is locked. Return the 'usid' of the user holding the lock
+or 0 if not locked.
Keyword arguments:
* sock -- a python socket instance
-### is\_running\_modified
+### is_running_modified
```python
is_running_modified(sock) -> bool
@@ -1378,39 +1514,38 @@ Keyword arguments:
iterate(sock, thandle, iter, flags, path) -> None
```
-Used to iterate over all the data in a transaction and the underlying data store as opposed to only iterate over changes like diff\_iterate.
+Used to iterate over all the data in a transaction and the underlying data
+store, as opposed to only iterating over changes the way diff_iterate does.
Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
* iter -- iterator function, will be called for every diff in the transaction
-* flags -- ITER\_WANT\_ATTR or 0
+* flags -- ITER_WANT_ATTR or 0
* path -- receive only changes from this path and below
The iter callback function should have the following signature:
-```
-def my_iterator(kp, v, attr_vals)
-```
+ def my_iterator(kp, v, attr_vals)
-### keypath\_diff\_iterate
+### keypath_diff_iterate
```python
keypath_diff_iterate(sock, thandle, iter, flags, path) -> None
```
-Like diff\_iterate but takes an additional path argument.
+Like diff_iterate but takes an additional path argument.
Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
* iter -- iterator function, will be called for every diff in the transaction
-* flags -- bitmask of ITER\_WANT\_ATTR and ITER\_WANT\_P\_CONTAINER
+* flags -- bitmask of ITER_WANT_ATTR and ITER_WANT_P_CONTAINER
* path -- receive only changes from this path and below
-### kill\_user\_session
+### kill_user_session
```python
kill_user_session(sock, usessid) -> None
@@ -1423,21 +1558,21 @@ Keyword arguments:
* sock -- a python socket instance
* usessid -- the MAAPI session id to be killed
-### load\_config
+### load_config
```python
load_config(sock, thandle, flags, filename) -> None
```
-Loads configuration from 'filename'. The caller of the function has to indicate which format the file has by using one of the following flags:
+Loads configuration from 'filename'.
+The caller of the function has to indicate which format the file has by
+using one of the following flags:
-```
- CONFIG_XML -- XML format
- CONFIG_J -- Juniper curly bracket style
- CONFIG_C -- Cisco XR style
- CONFIG_TURBO_C -- A faster version of CONFIG_C
- CONFIG_C_IOS -- Cisco IOS style
-```
+ CONFIG_XML -- XML format
+ CONFIG_J -- Juniper curly bracket style
+ CONFIG_C -- Cisco XR style
+ CONFIG_TURBO_C -- A faster version of CONFIG_C
+ CONFIG_C_IOS -- Cisco IOS style
Keyword arguments:
@@ -1446,7 +1581,7 @@ Keyword arguments:
* flags -- as above
* filename -- to read the configuration from
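+
+A minimal sketch, assuming th is an open read-write transaction and that
+the format flags listed above are exposed by the _ncs.maapi module:
+
+```python
+from _ncs import maapi
+
+def load_xml_file(sock, th, filename='/tmp/config.xml'):
+    # CONFIG_XML is one of the format flags listed above
+    maapi.load_config(sock, th, maapi.CONFIG_XML, filename)
+```
+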
-### load\_config\_cmds
+### load_config_cmds
```python
load_config_cmds(sock, thandle, flags, cmds, path) -> None
@@ -1461,34 +1596,36 @@ Keyword arguments:
* cmds -- a string of cmds
* flags -- as above
-### load\_config\_stream
+### load_config_stream
```python
load_config_stream(sock, th, flags) -> int
```
-Loads configuration from the stream socket. The th and flags parameters are the same as for load\_config(). Returns and id.
+Loads configuration from the stream socket. The th and flags parameters are
+the same as for load_config(). Returns an id.
Keyword arguments:
* sock -- a python socket instance
* thandle -- a transaction handle
-* flags -- as for load\_config()
+* flags -- as for load_config()
-### load\_config\_stream\_result
+### load_config_stream_result
```python
load_config_stream_result(sock, id) -> int
```
-We use this function to verify that the configuration we wrote on the stream socket was successfully loaded.
+We use this function to verify that the configuration we wrote on the
+stream socket was successfully loaded.
Keyword arguments:
* sock -- a python socket instance
-* id -- the id returned from load\_config\_stream()
+* id -- the id returned from load_config_stream()
-### load\_schemas
+### load_schemas
```python
load_schemas(sock) -> None
@@ -1500,7 +1637,7 @@ Keyword arguments:
* sock -- a python socket instance
-### load\_schemas\_list
+### load_schemas_list
```python
load_schemas_list(sock, flags, nshash, nsflags) -> None
@@ -1528,7 +1665,7 @@ Keyword arguments:
* sock -- a python socket instance
* name -- name of the database to lock
-### lock\_partial
+### lock_partial
```python
lock_partial(sock, name, xpaths) -> int
@@ -1547,7 +1684,8 @@ Keyword arguments:
move(sock, thandle, tokey, path) -> None
```
-Moves an existing list entry, i.e. renames the entry using the tokey parameter.
+Moves an existing list entry, i.e. renames the entry using the tokey
+parameter.
Keyword arguments:
@@ -1556,7 +1694,7 @@ Keyword arguments:
* tokey -- confdValue list
* path -- the subtree rooted at path is copied
-### move\_ordered
+### move_ordered
```python
move_ordered(sock, thandle, where, tokey, path) -> None
@@ -1572,7 +1710,7 @@ Keyword arguments:
* tokey -- confdValue list
* path -- the subtree rooted at path is copied
-### netconf\_ssh\_call\_home
+### netconf_ssh_call_home
```python
netconf_ssh_call_home(sock, host, port) -> None
@@ -1582,9 +1720,11 @@ Initiates a NETCONF SSH Call Home connection.
Keyword arguments:
-sock -- a python socket instance host -- an ipv4 addres, ipv6 address, or host name port -- the port to connect to
+* sock -- a python socket instance
+* host -- an IPv4 address, IPv6 address, or host name
+* port -- the port to connect to
-### netconf\_ssh\_call\_home\_opaque
+### netconf_ssh_call_home_opaque
```python
netconf_ssh_call_home_opaque(sock, host, opaque, port) -> None
@@ -1592,9 +1732,13 @@ netconf_ssh_call_home_opaque(sock, host, opaque, port) -> None
Initiates a NETCONF SSH Call Home connection.
-Keyword arguments: sock -- a python socket instance host -- an ipv4 addres, ipv6 address, or host name opaque -- opaque string passed to an external call home session port -- the port to connect to
+Keyword arguments:
+* sock -- a python socket instance
+* host -- an IPv4 address, IPv6 address, or host name
+* opaque -- opaque string passed to an external call home session
+* port -- the port to connect to
-### num\_instances
+### num_instances
```python
num_instances(sock, thandle, path) -> int
@@ -1608,7 +1752,7 @@ Keyword arguments:
* thandle -- transaction handle
* path -- position to check
-### perform\_upgrade
+### perform_upgrade
```python
perform_upgrade(sock, loadpathdirs) -> None
@@ -1634,7 +1778,7 @@ Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
-### prepare\_trans
+### prepare_trans
```python
prepare_trans(sock, thandle) -> None
@@ -1647,7 +1791,7 @@ Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
-### prepare\_trans\_flags
+### prepare_trans_flags
```python
prepare_trans_flags(sock, thandle, flags) -> None
@@ -1661,13 +1805,14 @@ Keyword arguments:
* thandle -- transaction handle
* flags -- flags to set in the transaction
-### prio\_message
+### prio_message
```python
prio_message(sock, to, message) -> None
```
-Like sys\_message but will be output directly instead of delivered when the receiver terminates any ongoing command.
+Like sys_message but will be output directly instead of delivered when the
+receiver terminates any ongoing command.
Keyword arguments:
@@ -1675,40 +1820,50 @@ Keyword arguments:
* to -- user to send message to or 'all' to send to all users
* message -- the message
-### progress\_info
+### progress_info
```python
progress_info(sock, msg, verbosity, attrs, links, path) -> None
```
-While spans represents a pair of data points: start and stop; info events are instead singular events, one point in time. Call progress\_info() to write a progress span info event to the progress trace. The info event will have the same span-id as the start and stop events of the currently ongoing progress span in the active user session or transaction. See start\_progress\_span() for more information.
+While spans represent a pair of data points, start and stop, info events
+are singular events at one point in time. Call progress_info() to
+write a progress span info event to the progress trace. The info event
+will have the same span-id as the start and stop events of the currently
+ongoing progress span in the active user session or transaction. See
+start_progress_span() for more information.
Keyword arguments:
* sock -- a python socket instance
* msg -- message to report
-* verbosity -- VERBOSITY\_\*, default: VERBOSITY\_NORMAL (optional)
+* verbosity -- VERBOSITY_*, default: VERBOSITY_NORMAL (optional)
* attrs -- user defined attributes (dict)
-* links -- to existing traces or spans \[{'trace\_id':'...', 'span\_id':'...'}]
+* links -- to existing traces or spans [{'trace_id':'...', 'span_id':'...'}]
* path -- keypath to an action/leaf/service
-### progress\_info\_th
+### progress_info_th
```python
progress_info_th(sock, thandle, msg, verbosity, attrs, links, path) ->
None
```
-While spans represents a pair of data points: start and stop; info events are instead singular events, one point in time. Call progress\_info() to write a progress span info event to the progress trace. The info event will have the same span-id as the start and stop events of the currently ongoing progress span in the active user session or transaction. See start\_progress\_span() for more information.
+While spans represent a pair of data points, start and stop, info events
+are singular events at one point in time. Call progress_info() to
+write a progress span info event to the progress trace. The info event
+will have the same span-id as the start and stop events of the currently
+ongoing progress span in the active user session or transaction. See
+start_progress_span() for more information.
Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
* msg -- message to report
-* verbosity -- VERBOSITY\_\*, default: VERBOSITY\_NORMAL (optional)
+* verbosity -- VERBOSITY_*, default: VERBOSITY_NORMAL (optional)
* attrs -- user defined attributes (dict)
-* links -- to existing traces or spans \[{'trace\_id':'...', 'span\_id':'...'}]
+* links -- to existing traces or spans [{'trace_id':'...', 'span_id':'...'}]
* path -- keypath to an action/leaf/service
### pushd
@@ -1717,7 +1872,8 @@ Keyword arguments:
pushd(sock, thandle, path) -> None
```
-Like cd, but saves the previous position in the tree. This can later be used by popd to return.
+Like cd, but saves the previous position in the tree. This can later be used
+by popd to return.
Keyword arguments:
@@ -1725,19 +1881,19 @@ Keyword arguments:
* thandle -- transaction handle
* path -- position to change to
-### query\_free\_result
+### query_free_result
```python
query_free_result(qrs) -> None
```
-Deallocates the struct returned by 'query\_result()'.
+Deallocates the struct returned by 'query_result()'.
Keyword arguments:
* qrs -- the query result structure to free
-### query\_reset
+### query_reset
```python
query_reset(sock, qh) -> None
@@ -1750,7 +1906,7 @@ Keyword arguments:
* sock -- a python socket instance
* qh -- query handle
-### query\_reset\_to
+### query_reset_to
```python
query_reset_to(sock, qh, offset) -> None
@@ -1764,20 +1920,21 @@ Keyword arguments:
* qh -- query handle
* offset -- offset counted from the beginning
-### query\_result
+### query_result
```python
query_result(sock, qh) -> _ncs.QueryResult
```
-Fetches the next available chunk of results associated with query handle qh.
+Fetches the next available chunk of results associated with query handle
+qh.
Keyword arguments:
* sock -- a python socket instance
* qh -- query handle
-### query\_result\_count
+### query_result_count
```python
query_result_count(sock, qh) -> int
@@ -1790,28 +1947,32 @@ Keyword arguments:
* sock -- a python socket instance
* qh -- query handle
-### query\_start
+### query_start
```python
query_start(sock, thandle, expr, context_node, chunk_size, initial_offset,
result_as, select, sort) -> int
```
-Starts a new query attached to the transaction given in 'th'. Returns a query handle.
+Starts a new query attached to the transaction given in 'th'.
+Returns a query handle.
Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
* expr -- the XPath Path expression to evaluate
-* context\_node -- The context node (an ikeypath) for the primary expression, or None (which means that the context node will be /).
-* chunk\_size -- How many results to return at a time. If set to 0, a default number will be used.
-* initial\_offset -- Which result in line to begin with (1 means to start from the beginning).
-* result\_as -- The format the results will be returned in.
+* context_node -- The context node (an ikeypath) for the primary expression,
+ or None (which means that the context node will be /).
+* chunk_size -- How many results to return at a time. If set to 0,
+ a default number will be used.
+* initial_offset -- Which result in line to begin with (1 means to start
+ from the beginning).
+* result_as -- The format the results will be returned in.
* select -- An array of XPath 'select' expressions.
* sort -- An array of XPath expressions which will be used for sorting
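+
+A minimal sketch of the query functions, assuming th is an open
+transaction; _ncs.QUERY_STRING as the result format and the nresults
+attribute of the returned _ncs.QueryResult are assumptions, and the XPath
+expression is only an example:
+
+```python
+import _ncs
+from _ncs import maapi
+
+def count_xpath_hits(sock, th, expr='/devices/device'):
+    # chunk_size=0 -> default, initial_offset=1 -> start at the beginning
+    qh = maapi.query_start(sock, th, expr, None, 0, 1,
+                           _ncs.QUERY_STRING, ['.'], [])
+    qrs = maapi.query_result(sock, qh)   # first chunk of results
+    n = qrs.nresults                     # assumed QueryResult attribute
+    maapi.query_free_result(qrs)
+    maapi.query_stop(sock, qh)
+    return n
+```
+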
-### query\_stop
+### query_stop
```python
query_stop(sock, qh) -> None
@@ -1824,28 +1985,27 @@ Keyword arguments:
* sock -- a python socket instance
* qh -- query handle
-### rebind\_listener
+### rebind_listener
```python
rebind_listener(sock, listener) -> None
```
-Request that the subsystems specified by 'listeners' rebinds its listener socket(s).
+Request that the subsystems specified by 'listener' rebind their listener
+socket(s).
Keyword arguments:
* sock -- a python socket instance
-* listener -- One of the following parameters (ORed together if more than one)
+* listener -- One of the following parameters (ORed together if more than one)
- ```
- LISTENER_IPC
- LISTENER_NETCONF
- LISTENER_SNMP
- LISTENER_CLI
- LISTENER_WEBUI
- ```
+ LISTENER_IPC
+ LISTENER_NETCONF
+ LISTENER_SNMP
+ LISTENER_CLI
+ LISTENER_WEBUI
-### reload\_config
+### reload_config
```python
reload_config(sock) -> None
@@ -1857,7 +2017,7 @@ Keyword arguments:
* sock -- a python socket instance
-### reopen\_logs
+### reopen_logs
```python
reopen_logs(sock) -> None
@@ -1869,7 +2029,7 @@ Keyword arguments:
* sock -- a python socket instance
-### report\_progress
+### report_progress
```python
report_progress(sock, verbosity, msg) -> None
@@ -1877,9 +2037,11 @@ report_progress(sock, verbosity, msg) -> None
Report progress events.
-This function makes it possible to report transaction/action progress from user code.
+This function makes it possible to report transaction/action progress
+from user code.
-This function is deprecated and will be removed in a future release. Use progress\_info() instead.
+This function is deprecated and will be removed in a future release.
+Use progress_info() instead.
Keyword arguments:
@@ -1888,7 +2050,7 @@ Keyword arguments:
* verbosity -- at which verbosity level the message should be reported
* msg -- message to report
-### report\_progress2
+### report_progress2
```python
report_progress2(sock, verbosity, msg, package) -> None
@@ -1896,9 +2058,11 @@ report_progress2(sock, verbosity, msg, package) -> None
Report progress events.
-This function makes it possible to report transaction/action progress from user code.
+This function makes it possible to report transaction/action progress
+from user code.
-This function is deprecated and will be removed in a future release. Use progress\_info() instead.
+This function is deprecated and will be removed in a future release.
+Use progress_info() instead.
Keyword arguments:
@@ -1908,17 +2072,20 @@ Keyword arguments:
* msg -- message to report
* package -- from what package the message is reported
-### report\_progress\_start
+### report_progress_start
```python
report_progress_start(sock, verbosity, msg, package) -> int
```
-Report progress events. Used for calculation of the duration between two events.
+Report progress events.
+Used for calculation of the duration between two events.
-This function makes it possible to report transaction/action progress from user code.
+This function makes it possible to report transaction/action progress
+from user code.
-This function is deprecated and will be removed in a future release. Use start\_progress\_span() instead.
+This function is deprecated and will be removed in a future release.
+Use start_progress_span() instead.
Keyword arguments:
@@ -1928,18 +2095,21 @@ Keyword arguments:
* msg -- message to report
* package -- from what package the message is reported (only NCS)
-### report\_progress\_stop
+### report_progress_stop
```python
report_progress_stop(sock, verbosity, msg, annotation,
package, timestamp) -> int
```
-Report progress events. Used for calculation of the duration between two events.
+Report progress events.
+Used for calculation of the duration between two events.
-This function makes it possible to report transaction/action progress from user code.
+This function makes it possible to report transaction/action progress
+from user code.
-This function is deprecated and will be removed in a future release. Use end\_progress\_span() instead.
+This function is deprecated and will be removed in a future release.
+Use end_progress_span() instead.
Keyword arguments:
@@ -1947,11 +2117,12 @@ Keyword arguments:
* thandle -- transaction handle
* verbosity -- at which verbosity level the message should be reported
* msg -- message to report
-* annotation -- metadata about the event, indicating error, explains latency or shows result etc
+* annotation -- metadata about the event, indicating an error, explaining
+  latency, showing a result, etc.
* package -- from what package the message is reported (only NCS)
* timestamp -- start of the event
-### report\_service\_progress
+### report_service_progress
```python
report_service_progress(sock, verbosity, msg, path) -> None
@@ -1959,9 +2130,11 @@ report_service_progress(sock, verbosity, msg, path) -> None
Report progress events for a service.
-This function makes it possible to report transaction progress from FASTMAP code.
+This function makes it possible to report transaction progress
+from FASTMAP code.
-This function is deprecated and will be removed in a future release. Use progress\_info() instead.
+This function is deprecated and will be removed in a future release.
+Use progress_info() instead.
Keyword arguments:
@@ -1971,7 +2144,7 @@ Keyword arguments:
* msg -- message to report
* path -- service instance path
-### report\_service\_progress2
+### report_service_progress2
```python
report_service_progress2(sock, verbosity, msg, package, path) -> None
@@ -1979,9 +2152,11 @@ report_service_progress2(sock, verbosity, msg, package, path) -> None
Report progress events for a service.
-This function makes it possible to report transaction progress from FASTMAP code.
+This function makes it possible to report transaction progress
+from FASTMAP code.
-This function is deprecated and will be removed in a future release. Use progress\_info() instead.
+This function is deprecated and will be removed in a future release.
+Use progress_info() instead.
Keyword arguments:
@@ -1992,17 +2167,20 @@ Keyword arguments:
* package -- from what package the message is reported
* path -- service instance path
-### report\_service\_progress\_start
+### report_service_progress_start
```python
report_service_progress_start(sock, verbosity, msg, package, path) -> int
```
-Report progress events for a service. Used for calculation of the duration between two events.
+Report progress events for a service.
+Used for calculation of the duration between two events.
-This function makes it possible to report transaction progress from FASTMAP code.
+This function makes it possible to report transaction progress
+from FASTMAP code.
-This function is deprecated and will be removed in a future release. Use start\_progress\_span() instead.
+This function is deprecated and will be removed in a future release.
+Use start_progress_span() instead.
Keyword arguments:
@@ -2013,18 +2191,21 @@ Keyword arguments:
* package -- from what package the message is reported
* path -- service instance path
-### report\_service\_progress\_stop
+### report_service_progress_stop
```python
report_service_progress_stop(sock, verbosity, msg, annotation,
package, path) -> None
```
-Report progress events for a service. Used for calculation of the duration between two events.
+Report progress events for a service.
+Used for calculation of the duration between two events.
-This function makes it possible to report transaction progress from FASTMAP code.
+This function makes it possible to report transaction progress
+from FASTMAP code.
-This function is deprecated and will be removed in a future release. Use end\_progress\_span() instead.
+This function is deprecated and will be removed in a future release.
+Use end_progress_span() instead.
Keyword arguments:
@@ -2032,12 +2213,13 @@ Keyword arguments:
* thandle -- transaction handle
* verbosity -- at which verbosity level the message should be reported
* msg -- message to report
-* annotation -- metadata about the event, indicating error, explains latency or shows result etc
+* annotation -- metadata about the event, indicating an error, explaining
+  latency, showing a result, etc.
* package -- from what package the message is reported
* path -- service instance path
* timestamp -- start of the event
-### request\_action
+### request_action
```python
request_action(sock, params, hashed_ns, path) -> list
@@ -2049,16 +2231,17 @@ Keyword arguments:
* sock -- a python socket instance
* params -- tagValue parameters for the action
-* hashed\_ns -- namespace
+* hashed_ns -- namespace
* path -- path to action
-### request\_action\_str\_th
+### request_action_str_th
```python
request_action_str_th(sock, thandle, cmd, path) -> str
```
-The same as request\_action\_th but takes the parameters as a string and returns the result as a string.
+The same as request_action_th but takes the parameters as a string and
+returns the result as a string.
Keyword arguments:
@@ -2067,13 +2250,13 @@ Keyword arguments:
* cmd -- string parameters
* path -- path to action
-### request\_action\_th
+### request_action_th
```python
request_action_th(sock, thandle, params, path) -> list
```
-Same as for request\_action() but uses the current namespace.
+Same as for request_action() but uses the current namespace.
Keyword arguments:
@@ -2095,13 +2278,15 @@ Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
-### roll\_config
+### roll_config
```python
roll_config(sock, thandle, path) -> int
```
-This function can be used to save the equivalent of a rollback file for a given configuration before it is committed (or a subtree thereof) in curly bracket format. Returns an id
+This function can be used to save the equivalent of a rollback file for a
+given configuration (or a subtree thereof), in curly bracket format, before
+it is committed. Returns an id.
Keyword arguments:
@@ -2109,64 +2294,64 @@ Keyword arguments:
* thandle -- transaction handle
* path -- tree for which to save the rollback configuration
-### roll\_config\_result
+### roll_config_result
```python
roll_config_result(sock, id) -> int
```
-We use this function to assert that we received the entire rollback configuration over a stream socket.
+We use this function to assert that we received the entire rollback
+configuration over a stream socket.
Keyword arguments:
* sock -- a python socket instance
-* id -- the id returned from roll\_config()
+* id -- the id returned from roll_config()
-### save\_config
+### save_config
```python
save_config(sock, thandle, flags, path) -> int
```
-Save the config, returns an id. The flags parameter controls the saving as follows. The value is a bitmask.
-
-```
- CONFIG_XML -- The configuration format is XML.
- CONFIG_XML_PRETTY -- The configuration format is pretty printed XML.
- CONFIG_JSON -- The configuration is in JSON format.
- CONFIG_J -- The configuration is in curly bracket Juniper CLI
- format.
- CONFIG_C -- The configuration is in Cisco XR style format.
- CONFIG_TURBO_C -- The configuration is in Cisco XR style format.
- A faster parser than the normal CLI will be used.
- CONFIG_C_IOS -- The configuration is in Cisco IOS style format.
- CONFIG_XPATH -- The path gives an XPath filter instead of a
- keypath. Can only be used with CONFIG_XML and
- CONFIG_XML_PRETTY.
- CONFIG_WITH_DEFAULTS -- Default values are part of the
- configuration dump.
- CONFIG_SHOW_DEFAULTS -- Default values are also shown next to
- the real configuration value. Applies only to the CLI formats.
- CONFIG_WITH_OPER -- Include operational data in the dump.
- CONFIG_HIDE_ALL -- Hide all hidden nodes.
- CONFIG_UNHIDE_ALL -- Unhide all hidden nodes.
- CONFIG_WITH_SERVICE_META -- Include NCS service-meta-data
- attributes(refcounter, backpointer, out-of-band and
- original-value) in the dump.
- CONFIG_NO_PARENTS -- When a path is provided its parent nodes are by
- default included. With this option the output will begin
- immediately at path - skipping any parents.
- CONFIG_OPER_ONLY -- Include only operational data, and ancestors to
- operational data nodes, in the dump.
- CONFIG_NO_BACKQUOTE -- This option can only be used together with
- CONFIG_C and CONFIG_C_IOS. When set backslash will not be quoted
- in strings.
- CONFIG_CDB_ONLY -- Include only data stored in CDB in the dump. By
- default only configuration data is included, but the flag can be
- combined with either CONFIG_WITH_OPER or CONFIG_OPER_ONLY to
- save both configuration and operational data, or only
- operational data, respectively.
-```
+Save the config and return an id.
+The flags parameter controls the saving as follows. The value is a bitmask.
+
+ CONFIG_XML -- The configuration format is XML.
+ CONFIG_XML_PRETTY -- The configuration format is pretty printed XML.
+ CONFIG_JSON -- The configuration is in JSON format.
+ CONFIG_J -- The configuration is in curly bracket Juniper CLI
+ format.
+ CONFIG_C -- The configuration is in Cisco XR style format.
+ CONFIG_TURBO_C -- The configuration is in Cisco XR style format.
+ A faster parser than the normal CLI will be used.
+ CONFIG_C_IOS -- The configuration is in Cisco IOS style format.
+ CONFIG_XPATH -- The path gives an XPath filter instead of a
+ keypath. Can only be used with CONFIG_XML and
+ CONFIG_XML_PRETTY.
+ CONFIG_WITH_DEFAULTS -- Default values are part of the
+ configuration dump.
+ CONFIG_SHOW_DEFAULTS -- Default values are also shown next to
+ the real configuration value. Applies only to the CLI formats.
+ CONFIG_WITH_OPER -- Include operational data in the dump.
+ CONFIG_HIDE_ALL -- Hide all hidden nodes.
+ CONFIG_UNHIDE_ALL -- Unhide all hidden nodes.
+ CONFIG_WITH_SERVICE_META -- Include NCS service-meta-data
+        attributes (refcounter, backpointer, out-of-band and
+ original-value) in the dump.
+ CONFIG_NO_PARENTS -- When a path is provided its parent nodes are by
+ default included. With this option the output will begin
+ immediately at path - skipping any parents.
+ CONFIG_OPER_ONLY -- Include only operational data, and ancestors to
+ operational data nodes, in the dump.
+ CONFIG_NO_BACKQUOTE -- This option can only be used together with
+ CONFIG_C and CONFIG_C_IOS. When set backslash will not be quoted
+ in strings.
+ CONFIG_CDB_ONLY -- Include only data stored in CDB in the dump. By
+ default only configuration data is included, but the flag can be
+ combined with either CONFIG_WITH_OPER or CONFIG_OPER_ONLY to
+ save both configuration and operational data, or only
+ operational data, respectively.
Keyword arguments:
@@ -2175,7 +2360,7 @@ Keyword arguments:
* flags -- as above
* path -- save only configuration below path
-### save\_config\_result
+### save_config_result
```python
save_config_result(sock, id) -> None
@@ -2186,9 +2371,9 @@ Verify that we received the entire configuration over the stream socket.
Keyword arguments:
* sock -- a python socket instance
-* id -- the id returned from save\_config
+* id -- the id returned from save_config
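
The save_config()/save_config_result() pair is used together with a separate
stream socket: save_config() returns an id, the configuration is then read from
a socket set up with _ncs.stream_connect() using that id, and
save_config_result() confirms that everything was received. A hedged sketch of
that pattern follows; the address, port constant and credentials are
assumptions for illustration.

```python
import socket
import _ncs
import _ncs.maapi as maapi

sock = socket.socket()
maapi.connect(sock, '127.0.0.1', _ncs.NCS_PORT)      # assumed local NSO
maapi.start_user_session(sock, 'admin', 'system', ['admin'], '127.0.0.1',
                         _ncs.PROTO_TCP)
th = maapi.start_trans(sock, _ncs.RUNNING, _ncs.READ)

# Ask for a pretty-printed XML dump of everything below '/'.
save_id = maapi.save_config(sock, th, maapi.CONFIG_XML_PRETTY, '/')

# The configuration itself arrives on a separate stream socket.
stream = socket.socket()
_ncs.stream_connect(stream, save_id, 0, '127.0.0.1', _ncs.NCS_PORT)
data = b''
while True:
    chunk = stream.recv(4096)
    if not chunk:
        break
    data += chunk
stream.close()

# Verify that the complete configuration was received.
maapi.save_config_result(sock, save_id)
maapi.finish_trans(sock, th)
print(data.decode())
```
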
-### set\_attr
+### set_attr
```python
set_attr(sock, thandle, attr, v, keypath) -> None
@@ -2204,13 +2389,14 @@ Keyword arguments:
* v -- value to set the attribute to
* keypath -- path to choice
-### set\_comment
+### set_comment
```python
set_comment(sock, thandle, comment) -> None
```
-Set the Comment that is stored in the rollback file when a transaction towards running is committed.
+Set the Comment that is stored in the rollback file when a transaction
+towards running is committed.
Keyword arguments:
@@ -2218,13 +2404,14 @@ Keyword arguments:
* thandle -- transaction handle
* comment -- the Comment
-### set\_delayed\_when
+### set_delayed_when
```python
set_delayed_when(sock, thandle, on) -> None
```
-This function enables (on non-zero) or disables (on == 0) the 'delayed when' mode of a transaction.
+This function enables (on non-zero) or disables (on == 0) the 'delayed when'
+mode of a transaction.
Keyword arguments:
@@ -2232,7 +2419,7 @@ Keyword arguments:
* thandle -- transaction handle
* on -- disables when on=0, enables for all other n
-### set\_elem
+### set_elem
```python
set_elem(sock, thandle, v, path) -> None
@@ -2247,7 +2434,7 @@ Keyword arguments:
* v -- confdValue
* path -- position of elem
-### set\_elem2
+### set_elem2
```python
set_elem2(sock, thandle, strval, path) -> None
@@ -2262,13 +2449,13 @@ Keyword arguments:
* strval -- confdValue
* path -- position of elem
-### set\_flags
+### set_flags
```python
set_flags(sock, thandle, flags) -> None
```
-Modify read/write session aspect. See MAAPI\_FLAG\_xyz.
+Modify read/write session aspect. See MAAPI_FLAG_xyz.
Keyword arguments:
@@ -2276,13 +2463,14 @@ Keyword arguments:
* thandle -- transaction handle
* flags -- flags to set
-### set\_label
+### set_label
```python
set_label(sock, thandle, label) -> None
```
-Set the Label that is stored in the rollback file when a transaction towards running is committed.
+Set the Label that is stored in the rollback file when a transaction
+towards running is committed.
Keyword arguments:
@@ -2290,7 +2478,7 @@ Keyword arguments:
* thandle -- transaction handle
* label -- the Label
-### set\_namespace
+### set_namespace
```python
set_namespace(sock, thandle, hashed_ns) -> None
@@ -2302,22 +2490,25 @@ Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
-* hashed\_ns -- the namespace to use
+* hashed_ns -- the namespace to use
-### set\_next\_user\_session\_id
+### set_next_user_session_id
```python
set_next_user_session_id(sock, usessid) -> None
```
-Set the user session id that will be assigned to the next user session started. The given value is silently forced to be in the range 100 .. 2^31-1. This function can be used to ensure that session ids for user sessions started by northbound agents or via MAAPI are unique across a restart.
+Set the user session id that will be assigned to the next user session
+started. The given value is silently forced to be in the range 100 .. 2^31-1.
+This function can be used to ensure that session ids for user sessions
+started by northbound agents or via MAAPI are unique across a restart.
Keyword arguments:
* sock -- a python socket instance
* usessid -- user session id
-### set\_object
+### set_object
```python
set_object(sock, thandle, values, keypath) -> None
@@ -2332,7 +2523,7 @@ Keyword arguments:
* values -- list of values
* keypath -- path to set
-### set\_readonly\_mode
+### set_readonly_mode
```python
set_readonly_mode(sock, flag) -> None
@@ -2345,7 +2536,7 @@ Keyword arguments:
* sock -- a python socket instance
* flag -- non-zero means read-only mode
-### set\_running\_db\_status
+### set_running_db_status
```python
set_running_db_status(sock, status) -> None
@@ -2358,7 +2549,7 @@ Keyword arguments:
* sock -- a python socket instance
* status -- integer status to set
-### set\_user\_session
+### set_user_session
```python
set_user_session(sock, usessid) -> None
@@ -2371,7 +2562,7 @@ Keyword arguments:
* sock -- a python socket instance
* usessid -- user session id
-### set\_values
+### set_values
```python
set_values(sock, thandle, values, keypath) -> None
@@ -2386,13 +2577,13 @@ Keyword arguments:
* values -- list of tagValues
* keypath -- path to set
-### shared\_apply\_template
+### shared_apply_template
```python
shared_apply_template(sock, thandle, template, variables,flags, rootpath) -> None
```
-FASTMAP version of ncs\_apply\_template.
+FASTMAP version of ncs_apply_template.
Keyword arguments:
@@ -2403,13 +2594,13 @@ Keyword arguments:
* flags -- Must be set as 0
* rootpath -- in what context to apply the template
-### shared\_copy\_tree
+### shared_copy_tree
```python
shared_copy_tree(sock, thandle, flags, frompath, topath) -> None
```
-FASTMAP version of copy\_tree.
+FASTMAP version of copy_tree.
Keyword arguments:
@@ -2419,7 +2610,7 @@ Keyword arguments:
* frompath -- the path to copy the tree from
* topath -- the path to copy the tree to
-### shared\_create
+### shared_create
```python
shared_create(sock, thandle, flags, path) -> None
@@ -2433,7 +2624,7 @@ Keyword arguments:
* thandle -- transaction handle
* flags -- Must be set as 0
-### shared\_insert
+### shared_insert
```python
shared_insert(sock, thandle, flags, path) -> None
@@ -2448,13 +2639,13 @@ Keyword arguments:
* flags -- Must be set as 0
* path -- the path to the list to insert a new entry into
-### shared\_set\_elem
+### shared_set_elem
```python
shared_set_elem(sock, thandle, v, flags, path) -> None
```
-FASTMAP version of set\_elem.
+FASTMAP version of set_elem.
Keyword arguments:
@@ -2464,13 +2655,13 @@ Keyword arguments:
* flags -- should be 0
* path -- the path to the element to set
-### shared\_set\_elem2
+### shared_set_elem2
```python
shared_set_elem2(sock, thandle, strval, flags, path) -> None
```
-FASTMAP version of set\_elem2.
+FASTMAP version of set_elem2.
Keyword arguments:
@@ -2480,13 +2671,13 @@ Keyword arguments:
* flags -- should be 0
* path -- the path to the element to set
-### shared\_set\_values
+### shared_set_values
```python
shared_set_values(sock, thandle, values, flags, keypath) -> None
```
-FASTMAP version of set\_values.
+FASTMAP version of set_values.
Keyword arguments:
@@ -2496,7 +2687,7 @@ Keyword arguments:
* flags -- should be 0
* keypath -- path to set
-### snmpa\_reload
+### snmpa_reload
```python
snmpa_reload(sock, synchronous) -> None
@@ -2504,149 +2695,184 @@ snmpa_reload(sock, synchronous) -> None
Start a reload of SNMP Agent config from external data provider.
-Used by external data provider to notify that there is a change to the SNMP Agent config data. Calling the function with the argument 'synchronous' set to 1 or True means that the call will block until the loading is completed.
+Used by an external data provider to notify that there is a change to the SNMP
+Agent config data. Calling the function with the argument 'synchronous' set
+to 1 or True means that the call will block until the loading is completed.
Keyword arguments:
* sock -- a python socket instance
-* synchronous -- if 1, will wait for the loading complete and return when the loading is complete; if 0, will only initiate the loading and return immediately
+* synchronous -- if 1, wait until the loading is complete before returning;
+  if 0, only initiate the loading and return immediately
-### start\_phase
+### start_phase
```python
start_phase(sock, phase, synchronous) -> None
```
-When the system has been started in phase0, this function tells the system to proceed to start phase 1 or 2.
+When the system has been started in phase0, this function tells the system
+to proceed to start phase 1 or 2.
Keyword arguments:
* sock -- a python socket instance
* phase -- phase to start, 1 or 2
-* synchronous -- if 1, will wait for the loading complete and return when the loading is complete; if 0, will only initiate the loading of AAA data and return immediately
+* synchronous -- if 1, wait until the loading is complete before returning;
+  if 0, only initiate the loading of AAA data and return immediately
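
For example, if NSO was started with only phase 0 (e.g. ncs --start-phase0),
the remaining phases can be driven over MAAPI. A minimal sketch; the address
and port constant are assumptions.

```python
import socket
import _ncs
import _ncs.maapi as maapi

sock = socket.socket()
maapi.connect(sock, '127.0.0.1', _ncs.NCS_PORT)   # assumed local instance
maapi.start_phase(sock, 1, 1)   # go to phase 1 and wait for it to finish
maapi.start_phase(sock, 2, 1)   # then proceed to phase 2
```
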
-### start\_progress\_span
+### start_progress_span
```python
start_progress_span(sock, msg, verbosity, attrs, links, path) -> dict
```
-Starts a progress span. Progress spans are trace messages written to the progress trace and the developer log. A progress span consists of a start and a stop event which can be used to calculate the duration between the two. Those events can be identified with unique span-ids. Inside the span it is possible to start new spans, which will then become child spans, the parent-span-id is set to the previous spans' span-id. A child span can be used to calculate the duration of a sub task, and is started from consecutive maapi\_start\_progress\_span() calls, and is ended with maapi\_end\_progress\_span().
+Starts a progress span. Progress spans are trace messages written to the
+progress trace and the developer log. A progress span consists of a start
+and a stop event which can be used to calculate the duration between the
+two. Those events can be identified with unique span-ids. Inside the span
+it is possible to start new spans, which will then become child spans;
+the parent-span-id is set to the previous span's span-id. A child span
+can be used to calculate the duration of a sub task, and is started from
+consecutive maapi_start_progress_span() calls, and is ended with
+maapi_end_progress_span().
-The concepts of traces, trace-id and spans are highly influenced by https://opentelemetry.io/docs/concepts/signals/traces/#spans
+The concepts of traces, trace-id and spans are highly influenced by
+https://opentelemetry.io/docs/concepts/signals/traces/#spans
Keyword arguments:
* sock -- a python socket instance
* msg -- message to report
-* verbosity -- VERBOSITY\_\*, default: VERBOSITY\_NORMAL (optional)
+* verbosity -- VERBOSITY_*, default: VERBOSITY_NORMAL (optional)
* attrs -- user defined attributes (dict)
-* links -- to existing traces or spans \[{'trace\_id':'...', 'span\_id':'...'}]
+* links -- to existing traces or spans [{'trace_id':'...', 'span_id':'...'}]
* path -- keypath to an action/leaf/service
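
To make the pairing of start and stop events concrete, here is a hedged
sketch. It assumes an already connected MAAPI socket (sock) with an active
user session, that the optional arguments accept an empty dict/list/path, and
that the returned span dict is passed to end_progress_span() to close the
span; the attribute value is made up for illustration.

```python
import _ncs
import _ncs.maapi as maapi

# Assumes `sock` is a connected MAAPI socket with an active user session.
span = maapi.start_progress_span(sock, 'provisioning example service',
                                 _ncs.VERBOSITY_NORMAL,
                                 {'device': 'ce0'},   # made-up attribute
                                 [],                  # no links
                                 '')                  # no keypath
try:
    pass  # ... the work being measured ...
finally:
    # Assumed companion call; see end_progress_span() in this module.
    maapi.end_progress_span(sock, span, 'done')
```
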
-### start\_progress\_span\_th
+### start_progress_span_th
```python
start_progress_span_th(sock, thandle, msg, verbosity,
attrs, links, path) -> dict
```
-Starts a progress span. Progress spans are trace messages written to the progress trace and the developer log. A progress span consists of a start and a stop event which can be used to calculate the duration between the two. Those events can be identified with unique span-ids. Inside the span it is possible to start new spans, which will then become child spans, the parent-span-id is set to the previous spans' span-id. A child span can be used to calculate the duration of a sub task, and is started from consecutive maapi\_start\_progress\_span() calls, and is ended with maapi\_end\_progress\_span().
+Starts a progress span. Progress spans are trace messages written to the
+progress trace and the developer log. A progress span consists of a start
+and a stop event which can be used to calculate the duration between the
+two. Those events can be identified with unique span-ids. Inside the span
+it is possible to start new spans, which will then become child spans;
+the parent-span-id is set to the previous span's span-id. A child span
+can be used to calculate the duration of a sub task, and is started from
+consecutive maapi_start_progress_span() calls, and is ended with
+maapi_end_progress_span().
-The concepts of traces, trace-id and spans are highly influenced by https://opentelemetry.io/docs/concepts/signals/traces/#spans
+The concepts of traces, trace-id and spans are highly influenced by
+https://opentelemetry.io/docs/concepts/signals/traces/#spans
Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
* msg -- message to report
-* verbosity -- VERBOSITY\_\*, default: VERBOSITY\_NORMAL (optional)
+* verbosity -- VERBOSITY_*, default: VERBOSITY_NORMAL (optional)
* attrs -- user defined attributes (dict)
-* links -- to existing traces or spans \[{'trace\_id':'...', 'span\_id':'...'}]
+* links -- to existing traces or spans [{'trace_id':'...', 'span_id':'...'}]
* path -- keypath to an action/leaf/service
-### start\_trans
+### start_trans
```python
start_trans(sock, name, readwrite) -> int
```
-Creates a new transaction towards the data store specified by name, which can be one of CONFD\_CANDIDATE, CONFD\_RUNNING, or CONFD\_STARTUP (however updating the startup data store is better done via maapi\_copy\_running\_to\_startup()). The readwrite parameter can be either CONFD\_READ, to start a readonly transaction, or CONFD\_READ\_WRITE, to start a read-write transaction. The function returns the transaction id.
+Creates a new transaction towards the data store specified by name, which
+can be one of CONFD_CANDIDATE, CONFD_RUNNING, or CONFD_STARTUP (however
+updating the startup data store is better done via
+maapi_copy_running_to_startup()). The readwrite parameter can be either
+CONFD_READ, to start a readonly transaction, or CONFD_READ_WRITE, to start
+a read-write transaction. The function returns the transaction id.
Keyword arguments:
* sock -- a python socket instance
* name -- name of the database
-* readwrite -- CONFD\_READ or CONFD\_WRITE
+* readwrite -- CONFD_READ or CONFD_WRITE
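
For orientation, here is a minimal end-to-end sketch combining connect(),
start_user_session(), start_trans(), set_elem2(), apply_trans() and
finish_trans() from this module. The address, port constant, credentials and
keypath are illustrative assumptions; the Python binding exposes the datastore
and mode constants without the CONFD_ prefix (e.g. _ncs.RUNNING,
_ncs.READ_WRITE).

```python
import socket
import _ncs
import _ncs.maapi as maapi

sock = socket.socket()
maapi.connect(sock, '127.0.0.1', _ncs.NCS_PORT)      # assumed local NSO
maapi.start_user_session(sock, 'admin', 'system', ['admin'], '127.0.0.1',
                         _ncs.PROTO_TCP)

# Read-write transaction towards the running datastore.
th = maapi.start_trans(sock, _ncs.RUNNING, _ncs.READ_WRITE)

# Hypothetical leaf path, used only for illustration.
maapi.set_elem2(sock, th, 'new-value', '/some/config/leaf')

maapi.apply_trans(sock, th, False)
maapi.finish_trans(sock, th)
maapi.end_user_session(sock)
sock.close()
```

The high-level ncs.maapi module wraps this pattern in context managers, but
the low-level calls map directly to the functions documented here.
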
-### start\_trans2
+### start_trans2
```python
start_trans2(sock, name, readwrite, usid) -> int
```
-Start a transaction within an existing user session, returns the transaction id.
+Start a transaction within an existing user session and return the
+transaction id.
Keyword arguments:
* sock -- a python socket instance
* name -- name of the database
-* readwrite -- CONFD\_READ or CONFD\_WRITE
+* readwrite -- CONFD_READ or CONFD_WRITE
* usid -- user session id
-### start\_trans\_flags
+### start_trans_flags
```python
start_trans_flags(sock, name, readwrite, usid) -> int
```
-The same as start\_trans2, but can also set the same flags that 'set\_flags' can set.
+The same as start_trans2, but can also set the same flags that 'set_flags'
+can set.
Keyword arguments:
* sock -- a python socket instance
* name -- name of the database
-* readwrite -- CONFD\_READ or CONFD\_WRITE
+* readwrite -- CONFD_READ or CONFD_WRITE
* usid -- user session id
-* flags -- same as for 'set\_flags'
+* flags -- same as for 'set_flags'
-### start\_trans\_flags2
+### start_trans_flags2
```python
start_trans_flags2(sock, name, readwrite, usid, vendor, product, version,
client_id) -> int
```
-This function does the same as start\_trans\_flags() but allows for additional information to be passed to ConfD/NCS.
+This function does the same as start_trans_flags() but allows for
+additional information to be passed to ConfD/NCS.
Keyword arguments:
* sock -- a python socket instance
* name -- name of the database
-* readwrite -- CONFD\_READ or CONFD\_WRITE
+* readwrite -- CONFD_READ or CONFD_WRITE
* usid -- user session id
-* flags -- same as for 'set\_flags'
+* flags -- same as for 'set_flags'
* vendor -- vendor string (may be None)
* product -- product string (may be None)
* version -- version string (may be None)
-* client\_id -- client identification string (may be None)
+* client_id -- client identification string (may be None)
-### start\_trans\_in\_trans
+### start_trans_in_trans
```python
start_trans_in_trans(sock, readwrite, usid, thandle) -> int
```
-Start a transaction within an existing transaction, using the started transaction as backend instead of an actual data store. Returns the transaction id as an integer.
+Start a transaction within an existing transaction, using the started
+transaction as backend instead of an actual data store. Returns the
+transaction id as an integer.
Keyword arguments:
* sock -- a python socket instance
-* readwrite -- CONFD\_READ or CONFD\_WRITE
+* readwrite -- CONFD_READ or CONFD_WRITE
* usid -- user session id
* thandle -- identifies the backend transaction to use
-### start\_user\_session
+### start_user_session
```python
start_user_session(sock, username, context, groups, src_addr, prot) -> None
@@ -2663,7 +2889,7 @@ Keyword arguments:
* src-addr -- src address of e.g. the client connecting
* prot -- the protocol used by the client for connecting
-### start\_user\_session2
+### start_user_session2
```python
start_user_session2(sock, username, context, groups, src_addr, src_port, prot) -> None
@@ -2681,7 +2907,7 @@ Keyword arguments:
* src-port -- src port of e.g. the client connecting
* prot -- the protocol used by the client for connecting
-### start\_user\_session3
+### start_user_session3
```python
start_user_session3(sock, username, context, groups, src_addr, src_port, prot, vendor, product, version, client_id) -> None
@@ -2689,7 +2915,8 @@ start_user_session3(sock, username, context, groups, src_addr, src_port, prot, v
Establish a user session on the socket.
-This function does the same as start\_user\_session2() but allows for additional information to be passed to ConfD/NCS.
+This function does the same as start_user_session2() but allows for
+additional information to be passed to ConfD/NCS.
Keyword arguments:
@@ -2703,9 +2930,9 @@ Keyword arguments:
* vendor -- vendor string (may be None)
* product -- product string (may be None)
* version -- version string (may be None)
-* client\_id -- client identification string (may be None)
+* client_id -- client identification string (may be None)
-### start\_user\_session\_gen
+### start_user_session_gen
```python
start_user_session_gen(sock, username, context, groups, vendor, product, version, client_id) -> None
@@ -2713,7 +2940,8 @@ start_user_session_gen(sock, username, context, groups, vendor, product, versio
Establish a user session on the socket.
-This function does the same as start\_user\_session3() but it takes the source address of the supplied socket from the OS.
+This function does the same as start_user_session3() but
+it takes the source address of the supplied socket from the OS.
Keyword arguments:
@@ -2724,7 +2952,7 @@ Keyword arguments:
* vendor -- vendor string (may be None)
* product -- product string (may be None)
* version -- version string (may be None)
-* client\_id -- client identification string (may be None)
+* client_id -- client identification string (may be None)
### stop
@@ -2738,13 +2966,14 @@ Keyword arguments:
* sock -- a python socket instance
-### sys\_message
+### sys_message
```python
sys_message(sock, to, message) -> None
```
-Send a message to a specific user, a specific session or all user depending on the 'to' parameter. 'all', or can be used.
+Send a message to a specific user, a specific session, or all users,
+depending on the 'to' parameter ('all' addresses all users).
Keyword arguments:
@@ -2752,19 +2981,20 @@ Keyword arguments:
* to -- user to send message to or 'all' to send to all users
* message -- the message
-### unhide\_group
+### unhide_group
```python
unhide_group(sock, thandle, group_name) -> None
```
-Unhide all nodes belonging to a hide group in a transaction that started with flag FLAG\_HIDE\_ALL\_HIDEGROUPS.
+Unhide all nodes belonging to a hide group in a transaction that started
+with flag FLAG_HIDE_ALL_HIDEGROUPS.
Keyword arguments:
* sock -- a python socket instance
* thandle -- transaction handle
-* group\_name -- the group name
+* group_name -- the group name
### unlock
@@ -2779,7 +3009,7 @@ Keyword arguments:
* sock -- a python socket instance
* name -- name of the database to unlock
-### unlock\_partial
+### unlock_partial
```python
unlock_partial(sock, lockid) -> None
@@ -2792,7 +3022,7 @@ Keyword arguments:
* sock -- a python socket instance
* lockid -- id of the lock
-### user\_message
+### user_message
```python
user_message(sock, to, message, sender) -> None
@@ -2807,7 +3037,7 @@ Keyword arguments:
* message -- the message
* sender -- send as
-### validate\_trans
+### validate_trans
```python
validate_trans(sock, thandle, unlock, forcevalidation) -> None
@@ -2815,11 +3045,20 @@ validate_trans(sock, thandle, unlock, forcevalidation) -> None
Validates all data written in a transaction.
-If unlock is 1 (or True), the transaction is open for further editing even if validation succeeds. If unlock is 0 (or False) and the function returns CONFD\_OK, the next function to be called MUST be maapi\_prepare\_trans() or maapi\_finish\_trans().
+If unlock is 1 (or True), the transaction is open for further editing even
+if validation succeeds. If unlock is 0 (or False) and the function returns
+CONFD_OK, the next function to be called MUST be maapi_prepare_trans() or
+maapi_finish_trans().
-unlock = 1 can be used to implement a 'validate' command which can be given in the middle of an editing session. The first thing that happens is that a lock is set. If unlock == 1, the lock is released on success. The lock is always released on failure.
+unlock = 1 can be used to implement a 'validate' command which can be
+given in the middle of an editing session. The first thing that happens is
+that a lock is set. If unlock == 1, the lock is released on success. The
+lock is always released on failure.
-The forcevalidation argument should normally be 0 (or False). It has no effect for a transaction towards the running or startup data stores, validation is always performed. For a transaction towards the candidate data store, validation will not be done unless forcevalidation is non-zero.
+The forcevalidation argument should normally be 0 (or False). It has no
+effect for a transaction towards the running or startup data stores,
+validation is always performed. For a transaction towards the candidate
+data store, validation will not be done unless forcevalidation is non-zero.
Keyword arguments:
@@ -2828,7 +3067,7 @@ Keyword arguments:
* unlock -- int or bool
* forcevalidation -- int or bool
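
As an example, a 'validate' command issued in the middle of an editing
session (unlock = 1, as described above) could be sketched like this, assuming
sock and the read-write transaction th were set up as in the start_trans
example earlier:

```python
import _ncs.error
import _ncs.maapi as maapi

# Assumes `sock` and the read-write transaction `th` already exist.
try:
    maapi.validate_trans(sock, th, 1, 0)   # unlock=1, forcevalidation=0
    print('transaction contents are valid')
except _ncs.error.Error as err:
    print('validation failed:', err)
```
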
-### wait\_start
+### wait_start
```python
wait_start(sock, phase) -> None
@@ -2841,7 +3080,7 @@ Keyword arguments:
* sock -- a python socket instance
* phase -- phase to wait for, 0, 1 or 2
-### write\_service\_log\_entry
+### write_service_log_entry
```python
write_service_log_entry(sock, path, msg, type, level) -> None
@@ -2849,7 +3088,8 @@ write_service_log_entry(sock, path, msg, type, level) -> None
Write service log entries.
-This function makes it possible to write service log entries from FASTMAP code.
+This function makes it possible to write service log entries from
+FASTMAP code.
Keyword arguments:
@@ -2872,7 +3112,7 @@ Keyword arguments:
* sock -- a python socket instance
* xpath -- to convert
-### xpath2kpath\_th
+### xpath2kpath_th
```python
xpath2kpath_th(sock, thandle, xpath) -> _ncs.HKeypathRef
@@ -2886,13 +3126,21 @@ Keyword arguments:
* thandle -- transaction handle
* xpath -- to convert
-### xpath\_eval
+### xpath_eval
```python
xpath_eval(sock, thandle, expr, result, trace, path) -> None
```
-Evaluate the xpath expression in 'expr'. For each node in the resulting node the function 'result' is called with the keypath to the resulting node as the first argument and, if the node is a leaf and has a value. the value of that node as the second argument. For each invocation of 'result' the function should return ITER\_CONTINUE to tell the XPath evaluator to continue or ITER\_STOP to stop the evaluation. A trace function, 'pytrace', could be supplied and will be called with a single string as an argument. 'None' can be used if no trace is needed. Unless a 'path' is given the root node will be used as a context for the evaluations.
+Evaluate the xpath expression in 'expr'. For each node in the resulting
+node the function 'result' is called with the keypath to the resulting
+node as the first argument and, if the node is a leaf and has a value, the
+value of that node as the second argument. For each invocation of 'result'
+the function should return ITER_CONTINUE to tell the XPath evaluator to
+continue or ITER_STOP to stop the evaluation. A trace function, 'pytrace',
+could be supplied and will be called with a single string as an argument.
+'None' can be used if no trace is needed. Unless a 'path' is given the
+root node will be used as a context for the evaluations.
Keyword arguments:
@@ -2903,13 +3151,13 @@ Keyword arguments:
* trace -- a trace function that takes a string as a parameter
* path -- the context node
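
A short sketch of the callback contract described above; sock and th are
assumed to come from a setup like the start_trans example earlier, and the
XPath expression is illustrative only.

```python
import _ncs
import _ncs.maapi as maapi

matches = []

def result(kp, value=None):
    # kp is an HKeypathRef; value is set only for leafs that have a value.
    matches.append((str(kp), value))
    return _ncs.ITER_CONTINUE     # ITER_STOP would abort the evaluation

# No trace function; the context node is the root.
maapi.xpath_eval(sock, th, '/devices/device/name', result, None, '/')
```
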
-### xpath\_eval\_expr
+### xpath_eval_expr
```python
xpath_eval_expr(sock, thandle, expr, trace, path) -> str
```
-Like xpath\_eval but returns a string.
+Like xpath_eval but returns a string.
Keyword arguments:
@@ -2919,11 +3167,12 @@ Keyword arguments:
* trace -- a trace function that takes a string as a parameter
* path -- the context node
+
## Classes
### _class_ **Cursor**
-struct maapi\_cursor object
+struct maapi_cursor object
Members:
diff --git a/developer-reference/pyapi/_ncs.md b/developer-reference/pyapi/_ncs.md
index cda0def3..29cb1e62 100644
--- a/developer-reference/pyapi/_ncs.md
+++ b/developer-reference/pyapi/_ncs.md
@@ -1,29 +1,33 @@
-# \_ncs Module
+# Python _ncs Module
NCS Python low level module.
-This module and its submodules provide Python bindings for the C APIs, described by the [confd\_lib(3)](../../resources/man/confd_lib.3.md) man page.
+This module and its submodules provide Python bindings for the C APIs,
+described by the [confd_lib(3)](../../resources/man/confd_lib.3.md) man page.
-The companion high level module, ncs, provides an abstraction layer on top of this module and may be easier to use.
+The companion high level module, ncs, provides an abstraction layer on top of
+this module and may be easier to use.
## Submodules
-* [\_ncs.cdb](_ncs.cdb.md): Low level module for connecting to NCS built-in XML database (CDB).
-* [\_ncs.dp](_ncs.dp.md): Low level callback module for connecting data providers to NCS.
-* [\_ncs.error](_ncs.error.md): This module defines new NCS Python API exception classes.
-* [\_ncs.events](_ncs.events.md): Low level module for subscribing to NCS event notifications.
-* [\_ncs.ha](_ncs.ha.md): Low level module for connecting to NCS HA subsystem.
-* [\_ncs.maapi](_ncs.maapi.md): Low level module for connecting to NCS with a read/write interface inside transactions.
+- [_ncs.cdb](_ncs.cdb.md): Low level module for connecting to NCS built-in XML database (CDB).
+- [_ncs.dp](_ncs.dp.md): Low level callback module for connecting data providers to NCS.
+- [_ncs.error](_ncs.error.md): This module defines new NCS Python API exception classes.
+- [_ncs.events](_ncs.events.md): Low level module for subscribing to NCS event notifications.
+- [_ncs.ha](_ncs.ha.md): Low level module for connecting to NCS HA subsystem.
+- [_ncs.maapi](_ncs.maapi.md): Low level module for connecting to NCS with a read/write interface
+inside transactions.
## Functions
-### cs\_node\_cd
+### cs_node_cd
```python
cs_node_cd(start, path) -> Union[CsNode, None]
```
-Utility function which finds the resulting CsNode given an (optional) starting node and a (relative or absolute) string keypath.
+Utility function which finds the resulting CsNode given an (optional)
+starting node and a (relative or absolute) string keypath.
Keyword arguments:
@@ -36,23 +40,28 @@ Keyword arguments:
decrypt(ciphertext) -> str
```
-When data is read over the CDB interface, the MAAPI interface or received in event notifications, the data for the builtin types tailf:aes-cfb-128-encrypted-string and tailf:aes-256-cfb-128-encrypted-string is encrypted. This function decrypts ciphertext and returns the clear text as a string.
+When data is read over the CDB interface, the MAAPI interface or received
+in event notifications, the data for the builtin types
+tailf:aes-cfb-128-encrypted-string and
+tailf:aes-256-cfb-128-encrypted-string is encrypted.
+This function decrypts ciphertext and returns the clear text as
+a string.
Keyword arguments:
* ciphertext -- encrypted string
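
decrypt() only works after the encryption keys have been fetched into the
library over MAAPI. A hedged sketch; the address and port constant are
assumptions, and the ciphertext is expected to come from reading an encrypted
leaf.

```python
import socket
import _ncs
import _ncs.maapi as maapi

def read_cleartext(ciphertext):
    """Decrypt the string value of an encrypted leaf (sketch)."""
    sock = socket.socket()
    maapi.connect(sock, '127.0.0.1', _ncs.NCS_PORT)   # assumed local NSO
    # The keys must be loaded into the library before decrypt() is called.
    maapi.install_crypto_keys(sock)
    try:
        return _ncs.decrypt(ciphertext)
    finally:
        sock.close()
```
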
-### expr\_op2str
+### expr_op2str
```python
expr_op2str(op) -> str
```
-Convert confd\_expr\_op value to a string.
+Convert confd_expr_op value to a string.
Keyword arguments:
-* op -- confd\_expr\_op integer value
+* op -- confd_expr_op integer value
### fatal
@@ -60,84 +69,104 @@ Keyword arguments:
fatal(str) -> None
```
-Utility function which formats a string, prints it to stderr and exits with exit code 1. This function will never return.
+Utility function which formats a string, prints it to stderr and exits with
+exit code 1. This function will never return.
Keyword arguments:
* str -- a message string
-### find\_cs\_node
+### find_cs_node
```python
find_cs_node(hkeypath, len) -> Union[CsNode, None]
```
-Utility function which finds the CsNode corresponding to the len first elements of the hashed keypath. To make the search consider the full keypath leave out the len parameter.
+Utility function which finds the CsNode corresponding to the len first
+elements of the hashed keypath. To make the search consider the full
+keypath, leave out the len parameter.
Keyword arguments:
* hkeypath -- a HKeypathRef instance
* len -- number of elements to return (optional)
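
Both cs_node_cd() and find_cs_node() require that schema information has been
loaded into the library (for example with _ncs.maapi.load_schemas() over a
connected MAAPI socket). A small sketch using an assumed keypath from the NSO
device model; find_cs_node() performs the same kind of lookup starting from an
HKeypathRef received in a callback.

```python
import _ncs

# Assumes schema information has already been loaded into the library.
node = _ncs.cs_node_cd(None, '/ncs:devices/device/address')
if node is not None:
    print(_ncs.hash2str(node.tag()), node.info().shallow_type())
```
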
-### find\_cs\_node\_child
+### find_cs_node_child
```python
find_cs_node_child(parent, xmltag) -> Union[CsNode, None]
```
-Utility function which finds the CsNode corresponding to the child node given as xmltag.
+Utility function which finds the CsNode corresponding to the child node
+given as xmltag.
-See confd\_find\_cs\_node\_child() in [confd\_lib\_lib(3)](../../resources/man/confd_lib_lib.3.md).
+See confd_find_cs_node_child() in [confd_lib_lib(3)](../../resources/man/confd_lib_lib.3.md).
Keyword arguments:
* parent -- the parent CsNode
* xmltag -- the child node
-### find\_cs\_root
+### find_cs_root
```python
find_cs_root(ns) -> Union[CsNode, None]
```
-When schema information is available to the library, this function returns the root of the tree representaton of the namespace given by ns for the (first) toplevel node. For namespaces that are augmented into other namespaces such that they do not have a toplevel node, this function returns None - the nodes of such a namespace are found below the augment target node(s) in other tree(s).
+When schema information is available to the library, this function returns
+the root of the tree representation of the namespace given by ns for the
+(first) toplevel node. For namespaces that are augmented into other
+namespaces such that they do not have a toplevel node, this function returns
+None - the nodes of such a namespace are found below the augment target
+node(s) in other tree(s).
Keyword arguments:
* ns -- the namespace id
-### find\_ns\_type
+### find_ns_type
```python
find_ns_type(nshash, name) -> Union[CsType, None]
```
-Returns a CsType type definition for the type named name, which is defined in the namespace identified by nshash, or None if the type could not be found. If nshash is 0, the type name will be looked up among the built-in types (i.e. the YANG built-in types, the types defined in the YANG "tailf-common" module, and the types defined in the "confd" and "xs" namespaces).
+Returns a CsType type definition for the type named name, which is defined
+in the namespace identified by nshash, or None if the type could not be
+found. If nshash is 0, the type name will be looked up among the built-in
+types (i.e. the YANG built-in types, the types defined in the YANG
+"tailf-common" module, and the types defined in the "confd" and "xs"
+namespaces).
Keyword arguments:
* nshash -- a namespace hash or 0 (0 searches for built-in types)
* name -- the name of the type
-### get\_leaf\_list\_type
+### get_leaf_list_type
```python
get_leaf_list_type(node) -> CsType
```
-For a leaf-list node, the type() method in the CsNodeInfo identifies a "list type" for the leaf-list "itself". This function returns the type of the elements in the leaf-list, i.e. corresponding to the type substatement for the leaf-list in the YANG module.
+For a leaf-list node, the type() method in the CsNodeInfo identifies a
+"list type" for the leaf-list "itself". This function returns the type
+of the elements in the leaf-list, i.e. corresponding to the type
+substatement for the leaf-list in the YANG module.
Keyword arguments:
* node -- The CsNode of the leaf-list
-### get\_nslist
+### get_nslist
```python
get_nslist() -> list
```
-Provides a list of the namespaces known to the library as a list of five-tuples. Each tuple contains the the namespace hash (int), the prefix (string), the namespace uri (string), the revision (string), and the module name (string).
+Provides a list of the namespaces known to the library as a list of
+five-tuples. Each tuple contains the namespace hash (int), the prefix
+(string), the namespace uri (string), the revision (string), and the
+module name (string).
If schemas are not loaded an empty list will be returned.
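
A sketch of listing the known namespaces after loading schemas over MAAPI;
the address and port constant are assumptions.

```python
import socket
import _ncs
import _ncs.maapi as maapi

sock = socket.socket()
maapi.connect(sock, '127.0.0.1', _ncs.NCS_PORT)
maapi.load_schemas(sock)   # without this, get_nslist() returns []

for nshash, prefix, uri, revision, module in _ncs.get_nslist():
    print(prefix, uri, revision)
```
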
@@ -147,13 +176,15 @@ If schemas are not loaded an empty list will be returned.
hash2str(hash) -> Union[str, None]
```
-Returns a string representing the node name given by hash, or None if the hash value is not found. Requires that schema information has been loaded from the NCS daemon into the library - otherwise it always returns None.
+Returns a string representing the node name given by hash, or None if the
+hash value is not found. Requires that schema information has been loaded
+from the NCS daemon into the library - otherwise it always returns None.
Keyword arguments:
* hash -- a hash
-### hkeypath\_dup
+### hkeypath_dup
```python
hkeypath_dup(hkeypath) -> HKeypathRef
@@ -165,7 +196,7 @@ Keyword arguments:
* hkeypath -- a HKeypathRef instance
-### hkeypath\_dup\_len
+### hkeypath_dup_len
```python
hkeypath_dup_len(hkeypath, len) -> HKeypathRef
@@ -178,26 +209,31 @@ Keyword arguments:
* hkeypath -- a HKeypathRef instance
* len -- number of elements to include in the copy
-### hkp\_prefix\_tagmatch
+### hkp_prefix_tagmatch
```python
hkp_prefix_tagmatch(hkeypath, tags) -> bool
```
-A simplified version of hkp\_tagmatch() - it returns True if the tagpath matches a prefix of the hkeypath, i.e. it is equivalent to calling hkp\_tagmatch() and checking if the return value includes CONFD\_HKP\_MATCH\_TAGS.
+A simplified version of hkp_tagmatch() - it returns True if the tagpath
+matches a prefix of the hkeypath, i.e. it is equivalent to calling
+hkp_tagmatch() and checking if the return value includes CONFD_HKP_MATCH_TAGS.
Keyword arguments:
* hkeypath -- a HKeypathRef instance
* tags -- a list of XmlTag instances
-### hkp\_tagmatch
+### hkp_tagmatch
```python
hkp_tagmatch(hkeypath, tags) -> int
```
-When checking the hkeypaths that get passed into each iteration in e.g. cdb\_diff\_iterate() we can either explicitly check the paths, or use this function to do the job. The tags list (typically statically initialized) specifies a tagpath to match against the hkeypath. See cdb\_diff\_match().
+When checking the hkeypaths that get passed into each iteration in e.g.
+cdb_diff_iterate() we can either explicitly check the paths, or use this
+function to do the job. The tags list (typically statically initialized)
+specifies a tagpath to match against the hkeypath. See cdb_diff_match().
Keyword arguments:
@@ -210,7 +246,9 @@ Keyword arguments:
init(name, file, level) -> None
```
-Initializes the ConfD library. Must be called before any other NCS API functions are called. There should be no need to call this function directly. It is called internally when the Python module is loaded.
+Initializes the ConfD library. Must be called before any other NCS API
+functions are called. There should be no need to call this function
+directly. It is called internally when the Python module is loaded.
Keyword arguments:
@@ -218,7 +256,7 @@ Keyword arguments:
* file -- (optional)
* level -- (optional)
-### internal\_connect
+### internal_connect
```python
internal_connect(id, sock, ip, port, path) -> None
@@ -226,55 +264,67 @@ internal_connect(id, sock, ip, port, path) -> None
Internal function used by NCS Python VM.
-### list\_filter\_type2str
+### list_filter_type2str
```python
list_filter_type2str(op) -> str
```
-Convert confd\_list\_filter\_type value to a string.
+Convert confd_list_filter_type value to a string.
Keyword arguments:
-* type -- confd\_list\_filter\_type integer value
+* op -- confd_list_filter_type integer value
-### max\_object\_size
+### max_object_size
```python
max_object_size(object) -> int
```
-Utility function which returns the maximum size (i.e. the needed length of the confd\_value\_t array) for an "object" retrieved by cdb\_get\_object(), maapi\_get\_object(), and corresponding multi-object functions.
+Utility function which returns the maximum size (i.e. the needed length of
+the confd_value_t array) for an "object" retrieved by cdb_get_object(),
+maapi_get_object(), and corresponding multi-object functions.
Keyword arguments:
* object -- the CsNode
-### mmap\_schemas
+### mmap_schemas
```python
mmap_schemas(filename) -> None
```
-If shared memory schema support has been enabled, this function will will map a shared memory segment into the current process address space and make it ready for use.
+If shared memory schema support has been enabled, this function will
+map a shared memory segment into the current process address space
+and make it ready for use.
-The filename can be obtained by using the get\_schema\_file\_path() function
+The filename can be obtained by using the get_schema_file_path() function.
-The filename argument specifies the pathname of the file that is used as backing store.
+The filename argument specifies the pathname of the file that is used as
+backing store.
Keyword arguments:
* filename -- a filename string
-### next\_object\_node
+### next_object_node
```python
next_object_node(object, cur, value) -> Union[CsNode, None]
```
-Utility function to allow navigation of the confd\_cs\_node schema tree in parallel with the confd\_value\_t array populated by cdb\_get\_object(), maapi\_get\_object(), and corresponding multi-object functions.
+Utility function to allow navigation of the confd_cs_node schema tree in
+parallel with the confd_value_t array populated by cdb_get_object(),
+maapi_get_object(), and corresponding multi-object functions.
-The cur parameter is the CsNode for the current value, and the value parameter is the current value in the array. The function returns a CsNode for the next value in the array, or None when the complete object has been traversed. In the initial call for a given traversal, we must pass self.children() for the cur parameter - this always points to the CsNode for the first value in the array.
+The cur parameter is the CsNode for the current value, and the value
+parameter is the current value in the array. The function returns a CsNode
+for the next value in the array, or None when the complete object has been
+traversed. In the initial call for a given traversal, we must pass
+self.children() for the cur parameter - this always points to the CsNode
+for the first value in the array.
Keyword arguments:
@@ -288,38 +338,42 @@ Keyword arguments:
ns2prefix(ns) -> Union[str, None]
```
-Returns a string giving the namespace prefix for the namespace ns, if the namespace is known to the library - otherwise it returns None.
+Returns a string giving the namespace prefix for the namespace ns, if the
+namespace is known to the library - otherwise it returns None.
Keyword arguments:
* ns -- a namespace hash
-### pp\_kpath
+### pp_kpath
```python
pp_kpath(hkeypath) -> str
```
-Utility function which pretty prints a string representation of the path hkeypath. This will use the NCS curly brace notation, i.e. "/servers/server{www}/ip". Requires that schema information is available to the library.
+Utility function which pretty prints a string representation of the path
+hkeypath. This will use the NCS curly brace notation, i.e.
+"/servers/server{www}/ip". Requires that schema information is available
+to the library.
Keyword arguments:
* hkeypath -- a HKeypathRef instance
-### pp\_kpath\_len
+### pp_kpath_len
```python
pp_kpath_len(hkeypath, len) -> str
```
-A variant of pp\_kpath() that prints only the first len elements of hkeypath.
+A variant of pp_kpath() that prints only the first len elements of hkeypath.
Keyword arguments:
-* hkeypath -- a \_lib.HKeypathRef instance
+* hkeypath -- a _lib.HKeypathRef instance
* len -- number of elements to print
-### set\_debug
+### set_debug
```python
set_debug(level, file) -> None
@@ -332,13 +386,14 @@ Keyword arguments:
* file -- (optional)
* level -- (optional)
-### set\_kill\_child\_on\_parent\_exit
+### set_kill_child_on_parent_exit
```python
set_kill_child_on_parent_exit() -> bool
```
-Instruct the operating system to kill this process if the parent process exits.
+Instruct the operating system to kill this process if the parent process
+exits.
### str2hash
@@ -346,13 +401,15 @@ Instruct the operating system to kill this process if the parent process exits.
str2hash(str) -> int
```
-Returns the hash value representing the node name given by str, or 0 if the string is not found. Requires that schema information has been loaded from the NCS daemon into the library - otherwise it always returns 0.
+Returns the hash value representing the node name given by str, or 0 if the
+string is not found. Requires that schema information has been loaded from
+the NCS daemon into the library - otherwise it always returns 0.
Keyword arguments:
* str -- a name string
-### stream\_connect
+### stream_connect
```python
stream_connect(sock, id, flags, ip, port, path) -> None
@@ -365,27 +422,31 @@ Keyword arguments:
* sock -- a Python socket instance
* id -- id
* flags -- flags
-* ip -- ip address - if sock family is AF\_INET or AF\_INET6 (optional)
-* port -- port - if sock family is AF\_INET or AF\_INET6 (optional)
-* path -- a filename - if sock family is AF\_UNIX (optional)
+* ip -- ip address - if sock family is AF_INET or AF_INET6 (optional)
+* port -- port - if sock family is AF_INET or AF_INET6 (optional)
+* path -- a filename - if sock family is AF_UNIX (optional)
-### xpath\_pp\_kpath
+### xpath_pp_kpath
```python
xpath_pp_kpath(hkeypath) -> str
```
-Utility function which pretty prints a string representation of the path hkeypath. This will format the path as an XPath, i.e. "/servers/server\[name="www"']/ip". Requires that schema information is available to the library.
+Utility function which pretty prints a string representation of the path
+hkeypath. This will format the path as an XPath, i.e.
+"/servers/server[name="www"']/ip". Requires that schema information is
+available to the library.
Keyword arguments:
* hkeypath -- a HKeypathRef instance
+
## Classes
### _class_ **AttrValue**
-This type represents the c-type confd\_attr\_value\_t.
+This type represents the c-type confd_attr_value_t.
The constructor for this type has the following signature:
@@ -416,7 +477,7 @@ attribute value (Value)
### _class_ **AuthorizationInfo**
-This type represents the c-type struct confd\_authorization\_info.
+This type represents the c-type struct confd_authorization_info.
AuthorizationInfo cannot be directly instantiated from Python.
@@ -432,7 +493,7 @@ authorization groups (list of strings)
### _class_ **CsCase**
-This type represents the c-type struct confd\_cs\_case.
+This type represents the c-type struct confd_cs_case.
CsCase cannot be directly instantiated from Python.
@@ -538,7 +599,7 @@ Returns the CsCase tag hash.
### _class_ **CsChoice**
-This type represents the c-type struct confd\_cs\_choice.
+This type represents the c-type struct confd_cs_choice.
CsChoice cannot be directly instantiated from Python.
@@ -658,7 +719,7 @@ Returns the CsChoice tag hash.
### _class_ **CsNode**
-This type represents the c-type struct confd\_cs\_node.
+This type represents the c-type struct confd_cs_node.
CsNode cannot be directly instantiated from Python.
@@ -1044,7 +1105,7 @@ Returns the tag value.
### _class_ **CsNodeInfo**
-This type represents the c-type struct confd\_cs\_node\_info.
+This type represents the c-type struct confd_cs_node_info.
CsNodeInfo cannot be directly instantiated from Python.
@@ -1130,7 +1191,7 @@ Method:
max_occurs() -> int
```
-Returns CsNodeInfo max\_occurs.
+Returns CsNodeInfo max_occurs.
@@ -1144,7 +1205,7 @@ Method:
meta_data() -> Union[Dict, None]
```
-Returns CsNodeInfo meta\_data.
+Returns CsNodeInfo meta_data.
@@ -1158,7 +1219,7 @@ Method:
min_occurs() -> int
```
-Returns CsNodeInfo min\_occurs.
+Returns CsNodeInfo min_occurs.
@@ -1172,7 +1233,7 @@ Method:
shallow_type() -> int
```
-Returns CsNodeInfo shallow\_type.
+Returns CsNodeInfo shallow_type.
@@ -1192,7 +1253,7 @@ Returns CsNodeInfo type.
### _class_ **CsType**
-This type represents the c-type struct confd\_type.
+This type represents the c-type struct confd_type.
CsType cannot be directly instantiated from Python.
@@ -1208,7 +1269,10 @@ Method:
bitbig_size() -> int
```
-Returns the maximum size needed for the byte array for the BITBIG value when a YANG bits type has a highest position above 63. If this is not a BITBIG value or if the highest position is 63 or less, this function will return 0.
+Returns the maximum size needed for the byte array for the BITBIG value
+when a YANG bits type has a highest position above 63. If this is not a
+BITBIG value or if the highest position is 63 or less, this function will
+return 0.
@@ -1242,11 +1306,12 @@ Returns the CsType parent.
### _class_ **DateTime**
-This type represents the c-type struct confd\_datetime.
+This type represents the c-type struct confd_datetime.
The constructor for this type has the following signature:
-DateTime(year, month, day, hour, min, sec, micro, timezone, timezone\_minutes) -> object
+DateTime(year, month, day, hour, min, sec, micro, timezone,
+ timezone_minutes) -> object
Keyword arguments:
@@ -1258,7 +1323,7 @@ Keyword arguments:
* sec -- seconds (int)
* micro -- micro seconds (int)
* timezone -- the timezone (int)
-* timezone\_minutes -- number of timezone\_minutes (int)
+* timezone_minutes -- number of timezone_minutes (int)
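+
+A hedged construction example (all field values are illustrative; arguments
+follow the order in the signature above):
+
+    # year, month, day, hour, min, sec, micro, timezone, timezone_minutes
+    dt = _ncs.DateTime(2024, 1, 31, 23, 59, 59, 0, 0, 0)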
Members:
@@ -1336,27 +1401,33 @@ the year
### _class_ **HKeypathRef**
-This type represents the c-type confd\_hkeypath\_t.
+This type represents the c-type confd_hkeypath_t.
-HKeypathRef implements some sequence methods which enables indexing, iteration and length checking. There is also support for slicing, e.g:
+HKeypathRef implements some sequence methods which enable indexing,
+iteration and length checking. There is also support for slicing, e.g.:
-Lets say the variable hkp is a valid hkeypath pointing to '/foo/bar{a}/baz' and we slice that object like this:
+Let's say the variable hkp is a valid hkeypath pointing to '/foo/bar{a}/baz'
+and we slice that object like this:
-```
-newhkp = hkp[1:]
-```
+ newhkp = hkp[1:]
-In this case newhkp will be a new hkeypath pointing to '/foo/bar{a}'. Note that the last element must always be included, so trying to create a slice with hkp\[1:2] will fail.
+In this case newhkp will be a new hkeypath pointing to '/foo/bar{a}'.
+Note that the last element must always be included, so trying to create
+a slice with hkp[1:2] will fail.
-The example above could also be written using the dup\_len() method:
+The example above could also be written using the dup_len() method:
-```
-newhkp = hkp.dup_len(3)
-```
+ newhkp = hkp.dup_len(3)
-Retrieving an element of the HKeypathRef when the underlying Value is of type C\_XMLTAG returns a XmlTag instance. In all other cases a tuple of Values is returned.
+Retrieving an element of the HKeypathRef when the underlying Value is of
+type C_XMLTAG returns a XmlTag instance. In all other cases a tuple of
+Values is returned.
-When receiving an HKeypathRef object as on argument in a callback method, the underlying object is only borrowed, so this particular instance is only valid inside that callback method. If one, for some reason, would like to keep the HKeypathRef object 'alive' for any longer than that, use dup() or dup\_len() to get a copy of it. Slicing also creates a copy.
+When receiving an HKeypathRef object as an argument in a callback method,
+the underlying object is only borrowed, so that particular instance is only
+valid inside that callback method. To keep the HKeypathRef object alive
+beyond the callback, use dup() or dup_len() to get a copy of it. Slicing
+also creates a copy.
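+
+A hedged illustration (the callback name is illustrative):
+
+    def cb_get_elem(self, tctx, kp):
+        saved = kp.dup()    # copy that stays valid after the callback returns
+        parent = kp[1:]     # slicing also returns a new, owned copy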
HKeypathRef cannot be directly instantiated from Python.
@@ -1396,7 +1467,7 @@ Keyword arguments:
### _class_ **ProgressLink**
-This type represents the c-type struct confd\_progress\_link.
+This type represents the c-type struct confd_progress_link.
confdProgressLink cannot be directly instantiated from Python.
@@ -1420,9 +1491,10 @@ trace id (string)
### _class_ **QueryResult**
-This type represents the c-type struct confd\_query\_result.
+This type represents the c-type struct confd_query_result.
-QueryResult implements some sequence methods which enables indexing, iteration and length checking.
+QueryResult implements some sequence methods which enables indexing,
+iteration and length checking.
QueryResult cannot be directly instantiated from Python.
@@ -1462,7 +1534,7 @@ the query result type (int)
### _class_ **SnmpVarbind**
-This type represents the c-type struct confd\_snmp\_varbind.
+This type represents the c-type struct confd_snmp_varbind.
The constructor for this type has the following signature:
@@ -1470,14 +1542,15 @@ SnmpVarbind(type, val, vartype, name, oid, cr) -> object
Keyword arguments:
-* type -- SNMP\_VARIABLE, SNMP\_OID or SNMP\_COL\_ROW (int)
+* type -- SNMP_VARIABLE, SNMP_OID or SNMP_COL_ROW (int)
* val -- value (Value)
* vartype -- snmp type (optional)
-* name -- mandatory if type is SNMP\_VARIABLE (string)
-* oid -- mandatory if type is SNMP\_OID (list of integers)
-* cr -- mandatory if type is SNMP\_COL\_ROW (described below)
+* name -- mandatory if type is SNMP_VARIABLE (string)
+* oid -- mandatory if type is SNMP_OID (list of integers)
+* cr -- mandatory if type is SNMP_COL_ROW (described below)
-When type is SNMP\_COL\_ROW the cr argument must be provided. It is built up as a 2-tuple like this: tuple(string, list(int)).
+When type is SNMP_COL_ROW the cr argument must be provided. It is built up
+as a 2-tuple like this: tuple(string, list(int)).
The first element of the 2-tuple is the column name.
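+
+A hedged construction sketch (assuming the keyword arguments listed above
+are accepted; names, values and OIDs are illustrative only):
+
+    vb = _ncs.SnmpVarbind(type=_ncs.SNMP_VARIABLE, val=_ncs.Value('up'),
+                          name='ifAdminStatus')
+    row = _ncs.SnmpVarbind(type=_ncs.SNMP_COL_ROW, val=_ncs.Value(1),
+                           cr=('ifIndex', [1]))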
@@ -1495,15 +1568,18 @@ the SnmpVarbind type
### _class_ **TagValue**
-This type represents the c-type confd\_tag\_value\_t.
+This type represents the c-type confd_tag_value_t.
-In addition to the 'ns' and 'tag' attributes there is an additional attribute 'v' which containes the Value object.
+In addition to the 'ns' and 'tag' attributes there is a third attribute
+'v' which contains the Value object.
The constructor for this type has the following signature:
TagValue(xmltag, v, tag, ns) -> object
-There are two ways to contruct this object. The first one requires that both xmltag and v are specified. The second one requires that both tag and ns are specified.
+There are two ways to construct this object. The first one requires that both
+xmltag and v are specified. The second one requires that both tag and ns are
+specified.
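+
+A hedged sketch of the two construction styles (ns_hash and tag_hash are
+illustrative placeholder hash values):
+
+    tv1 = _ncs.TagValue(xmltag=_ncs.XmlTag(ns_hash, tag_hash), v=_ncs.Value(42))
+    tv2 = _ncs.TagValue(tag=tag_hash, ns=ns_hash)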
Keyword arguments:
@@ -1532,18 +1608,20 @@ tag hash
### _class_ **TransCtxRef**
-This type represents the c-type struct confd\_trans\_ctx.
+This type represents the c-type struct confd_trans_ctx.
Available attributes:
* fd -- worker socket (int)
* th -- transaction handle (int)
-* secondary\_index -- secondary index number for list traversal (int)
+* secondary_index -- secondary index number for list traversal (int)
* username -- from user session (string) DEPRECATED, see uinfo
* context -- from user session (string) DEPRECATED, see uinfo
* uinfo -- user session (UserInfo)
-* accumulated -- if the data provider is using the accumulate functionality this attribute will contain the first dp.TrItemRef object in the linked list, otherwise if will be None
-* traversal\_id -- unique id for the get\_next\* invocation
+* accumulated -- if the data provider is using the accumulate functionality
+                 this attribute will contain the first dp.TrItemRef object
+                 in the linked list, otherwise it will be None
+* traversal_id -- unique id for the get_next* invocation
TransCtxRef cannot be directly instantiated from Python.
@@ -1553,7 +1631,7 @@ _None_
### _class_ **UserInfo**
-This type represents the c-type struct confd\_user\_info.
+This type represents the c-type struct confd_user_info.
UserInfo cannot be directly instantiated from Python.
@@ -1563,7 +1641,7 @@ Members:
actx_thandle
-actx\_thandle -- action context transaction handle
+actx_thandle -- action context transaction handle
@@ -1579,7 +1657,7 @@ addr -- ip address (string)
af
-af -- address family AF\_INIT or AF\_INET6 (int)
+af -- address family AF_INET or AF_INET6 (int)
@@ -1603,7 +1681,7 @@ context -- the context (string)
flags
-flags -- CONFD\_USESS\_FLAG\_... (int)
+flags -- CONFD_USESS_FLAG_... (int)
@@ -1643,7 +1721,7 @@ proto -- protocol (int)
snmp_v3_ctx
-snmp\_v3\_ctx -- SNMP context (string)
+snmp_v3_ctx -- SNMP context (string)
@@ -1665,38 +1743,44 @@ usid -- user session id (int)
### _class_ **Value**
-This type represents the c-type confd\_value\_t.
+This type represents the c-type confd_value_t.
The constructor for this type has the following signature:
Value(init, type) -> object
-If type is not provided it will be automatically set by inspecting the type of argument init according to this table:
+If type is not provided it will be automatically set by inspecting the type
+of argument init according to this table:
-| Python type | Value type |
-| ----------- | ---------- |
-| bool | C\_BOOL |
-| int | C\_INT32 |
-| long | C\_INT64 |
-| float | C\_DOUBLE |
-| string | C\_BUF |
+Python type | Value type
+-----------------|------------
+bool | C_BOOL
+int | C_INT32
+long | C_INT64
+float | C_DOUBLE
+string | C_BUF
-If any other type is provided for the init argument, the type will be set to C\_BUF and the value will be the string representation of init.
+If any other type is provided for the init argument, the type will be set to
+C_BUF and the value will be the string representation of init.
-For types C\_XMLTAG, C\_XMLBEGIN and C\_XMLEND the init argument must be a 2-tuple which specifies the ns and tag values like this: (ns, tag).
+For types C_XMLTAG, C_XMLBEGIN and C_XMLEND the init argument must be a
+2-tuple which specifies the ns and tag values like this: (ns, tag).
-For type C\_IDENTITYREF the init argument must be a 2-tuple which specifies the ns and id values like this: (ns, id).
+For type C_IDENTITYREF the init argument must be a
+2-tuple which specifies the ns and id values like this: (ns, id).
-For types C\_IPV4, C\_IPV6, C\_DATETIME, C\_DATE, C\_TIME, C\_DURATION, C\_OID, C\_IPV4PREFIX and C\_IPV6PREFIX, the init argument must be a string.
+For types C_IPV4, C_IPV6, C_DATETIME, C_DATE, C_TIME, C_DURATION, C_OID,
+C_IPV4PREFIX and C_IPV6PREFIX, the init argument must be a string.
-For type C\_DECIMAL64 the init argument must be a string, or a 2-tuple which specifies value and fraction digits like this: (value, fraction\_digits).
+For type C_DECIMAL64 the init argument must be a string, or a 2-tuple which
+specifies value and fraction digits like this: (value, fraction_digits).
-For type C\_BINARY the init argument must be a bytes instance.
+For type C_BINARY the init argument must be a bytes instance.
Keyword arguments:
* init -- the initial value
-* type -- type (optional, see confd\_types(3))
+* type -- type (optional, see confd_types(3))
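+
+A few hedged construction examples (values are illustrative):
+
+    v_int = _ncs.Value(42)                        # C_INT32, per the table above
+    v_str = _ncs.Value('hello')                   # C_BUF
+    v_ip  = _ncs.Value('10.0.0.1', _ncs.C_IPV4)   # string init for C_IPV4
+    v_dec = _ncs.Value('3.14', _ncs.C_DECIMAL64)  # string init for C_DECIMAL64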
Members:
@@ -1710,7 +1794,8 @@ Method:
as_decimal64() -> Tuple[int, int]
```
-Returns a tuple containing (value, fraction\_digits) if this value is of type C\_DECIMAL64.
+Returns a tuple containing (value, fraction_digits) if this value is of
+type C_DECIMAL64.
@@ -1724,7 +1809,7 @@ Method:
as_list() -> list
```
-Returns a list of Value's if this value is of type C\_LIST.
+Returns a list of Value instances if this value is of type C_LIST.
@@ -1738,11 +1823,15 @@ Method:
as_pyval() -> Any
```
-Tries to convert a Value to a native Python type. If possible the object returned will be of the same type as used when initializing a Value object. If the type cannot be represented as something useful in Python a string will be returned. Note that not all Value types are supported.
+Tries to convert a Value to a native Python type. If possible, the object
+returned will be of the same type as used when initializing a Value object.
+If the type cannot be represented as something useful in Python, a string
+will be returned. Note that not all Value types are supported.
-E.g. assuming you already have a value object, this should be possible in most cases:
+E.g. assuming you already have a value object, this should be possible
+in most cases:
-newvalue = Value(value.as\_pyval(), value.confd\_type())
+ newvalue = Value(value.as_pyval(), value.confd_type())
@@ -1756,7 +1845,7 @@ Method:
as_xmltag() -> XmlTag
```
-Returns a XmlTag instance if this value is of type C\_XMLTAG.
+Returns a XmlTag instance if this value is of type C_XMLTAG.
@@ -1799,12 +1888,14 @@ str2val(value, schema_type) -> Value
(class method)
```
-Create and return a Value from a string. The schema\_type argument must be either a 2-tuple with namespace and keypath, a CsNode instance or a CsType instance.
+Create and return a Value from a string. The schema_type argument must be
+either a 2-tuple with namespace and keypath, a CsNode instance or a CsType
+instance.
Keyword arguments:
* value -- string value
-* schema\_type -- either (ns, keypath), a CsNode or a CsType
+* schema_type -- either (ns, keypath), a CsNode or a CsType
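+
+A hedged sketch (assumes schema information is loaded; the path is
+illustrative and cs_node_cd is just one way to obtain a CsNode):
+
+    csnode = _ncs.cs_node_cd(None, '/ncs:devices/device/address')
+    v = _ncs.Value.str2val('127.0.0.1', csnode)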
@@ -1818,17 +1909,19 @@ Method:
val2str(schema_type) -> str
```
-Return a string representation of Value. The schema\_type argument must be either a 2-tuple with namespace and keypath, a CsNode instance or a CsType instance.
+Return a string representation of Value. The schema_type argument must be
+either a 2-tuple with namespace and keypath, a CsNode instance or a CsType
+instance.
Keyword arguments:
-* schema\_type -- either (ns, keypath), a CsNode or a CsType
+* schema_type -- either (ns, keypath), a CsNode or a CsType
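+
+Continuing the hedged str2val sketch above, the reverse direction:
+
+    s = v.val2str(csnode)    # e.g. '127.0.0.1'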
### _class_ **XmlTag**
-This type represent the c-type struct xml\_tag.
+This type represents the c-type struct xml_tag.
The constructor for this type has the following signature:
@@ -1984,6 +2077,7 @@ ERR_BADSTATE = 17
ERR_BADTYPE = 5
ERR_BAD_CONFIG = 36
ERR_BAD_KEYREF = 14
+ERR_BAD_PAYLOAD = 72
ERR_CLI_CMD = 59
ERR_DATA_MISSING = 58
ERR_EOF = 45
@@ -2162,6 +2256,18 @@ TRACE = 2
TRANSACTION = 5
TRANS_CB_FLAG_FILTERED = 1
TRUE = 1
+TYPE_BITS = 3
+TYPE_DECIMAL64 = 4
+TYPE_DISPLAY_HINT = 10
+TYPE_ENUM = 1
+TYPE_IDENTITY = 11
+TYPE_IDREF = 2
+TYPE_LIST = 6
+TYPE_LIST_RESTR = 9
+TYPE_NONE = 0
+TYPE_NUMBER = 7
+TYPE_STRING = 8
+TYPE_UNION = 5
USESS_FLAG_FORWARD = 1
USESS_FLAG_HAS_IDENTIFICATION = 2
USESS_FLAG_HAS_OPAQUE = 4
diff --git a/developer-reference/pyapi/index.md b/developer-reference/pyapi/index.md
new file mode 100644
index 00000000..338d260f
--- /dev/null
+++ b/developer-reference/pyapi/index.md
@@ -0,0 +1,25 @@
+# Python API Reference
+
+Documentation for Python modules, generated from module source:
+
+- [ncs](ncs.md): NCS Python high level module.
+- [ncs.alarm](ncs.alarm.md): NCS Alarm Manager module.
+- [ncs.application](ncs.application.md): Module for building NCS applications.
+- [ncs.cdb](ncs.cdb.md): CDB high level module.
+- [ncs.dp](ncs.dp.md): Callback module for connecting data providers to ConfD/NCS.
+- [ncs.experimental](ncs.experimental.md): Experimental stuff.
+- [ncs.log](ncs.log.md): This module provides some logging utilities.
+- [ncs.maagic](ncs.maagic.md): Confd/NCS data access module.
+- [ncs.maapi](ncs.maapi.md): MAAPI high level module.
+- [ncs.progress](ncs.progress.md): MAAPI progress trace high level module.
+- [ncs.service_log](ncs.service_log.md): This module provides service logging.
+- [ncs.template](ncs.template.md): This module implements classes to simplify template processing.
+- [ncs.util](ncs.util.md): Utility module, low level abstractions.
+- [_ncs](_ncs.md): NCS Python low level module.
+- [_ncs.cdb](_ncs.cdb.md): Low level module for connecting to NCS built-in XML database (CDB).
+- [_ncs.dp](_ncs.dp.md): Low level callback module for connecting data providers to NCS.
+- [_ncs.error](_ncs.error.md): This module defines new NCS Python API exception classes.
+- [_ncs.events](_ncs.events.md): Low level module for subscribing to NCS event notifications.
+- [_ncs.ha](_ncs.ha.md): Low level module for connecting to NCS HA subsystem.
+- [_ncs.maapi](_ncs.maapi.md): Low level module for connecting to NCS with a read/write interface
+inside transactions.
diff --git a/developer-reference/pyapi/ncs.cdb.md b/developer-reference/pyapi/ncs.cdb.md
index 22c241a2..bc09c919 100644
--- a/developer-reference/pyapi/ncs.cdb.md
+++ b/developer-reference/pyapi/ncs.cdb.md
@@ -135,7 +135,7 @@ called terminates -- either normally or through an unhandled exception
or until the optional timeout occurs.
When the timeout argument is present and not None, it should be a
-floating point number specifying a timeout for the operation in seconds
+floating-point number specifying a timeout for the operation in seconds
(or fractions thereof). As join() always returns None, you must call
is_alive() after join() to decide whether a timeout happened -- if the
thread is still alive, the join() call timed out.
@@ -489,7 +489,7 @@ called terminates -- either normally or through an unhandled exception
or until the optional timeout occurs.
When the timeout argument is present and not None, it should be a
-floating point number specifying a timeout for the operation in seconds
+floating-point number specifying a timeout for the operation in seconds
(or fractions thereof). As join() always returns None, you must call
is_alive() after join() to decide whether a timeout happened -- if the
thread is still alive, the join() call timed out.
@@ -810,7 +810,7 @@ called terminates -- either normally or through an unhandled exception
or until the optional timeout occurs.
When the timeout argument is present and not None, it should be a
-floating point number specifying a timeout for the operation in seconds
+floating-point number specifying a timeout for the operation in seconds
(or fractions thereof). As join() always returns None, you must call
is_alive() after join() to decide whether a timeout happened -- if the
thread is still alive, the join() call timed out.
diff --git a/developer-reference/pyapi/ncs.dp.md b/developer-reference/pyapi/ncs.dp.md
index 99100623..5591f273 100644
--- a/developer-reference/pyapi/ncs.dp.md
+++ b/developer-reference/pyapi/ncs.dp.md
@@ -363,7 +363,7 @@ called terminates -- either normally or through an unhandled exception
or until the optional timeout occurs.
When the timeout argument is present and not None, it should be a
-floating point number specifying a timeout for the operation in seconds
+floating-point number specifying a timeout for the operation in seconds
(or fractions thereof). As join() always returns None, you must call
is_alive() after join() to decide whether a timeout happened -- if the
thread is still alive, the join() call timed out.
@@ -1232,7 +1232,6 @@ NCS_UNKNOWN_NED_IDS_COMPLIANCE_TEMPLATE = 124
NCS_UNKNOWN_NED_ID_DEVICE_TEMPLATE = 106
NCS_XML_PARSE = 11
NCS_YANGLIB_NO_SCHEMA_FOR_RUNNING = 114
-OPERATION_CASE_EXISTS = 13
PATCH_FLAG_AAA_CHECKED = 8
PATCH_FLAG_BUFFER_DAMPENED = 2
PATCH_FLAG_FILTER = 4
diff --git a/developer-reference/pyapi/ncs.maagic.md b/developer-reference/pyapi/ncs.maagic.md
index 26964e75..67c4506a 100644
--- a/developer-reference/pyapi/ncs.maagic.md
+++ b/developer-reference/pyapi/ncs.maagic.md
@@ -6,6 +6,21 @@ This module implements classes and function for easy access to the data store.
There is no need to manually instantiate any of the classes herein. The only
functions that should be used are cd(), get_node() and get_root().
+Node Comparison in NSO 6.1.17+ (May 2025 and later):
+-----------------------------------------------------
+
+In NSO 6.1.17, 6.2.12, 6.3.9, 6.4.5, 6.5.1 and 6.6, node object caching was
+changed to reduce excessive memory usage. This change broke services that
+rely on node == comparisons. Use get_node_path() for reliable node
+identification:
+
+ from ncs.maagic import get_node_path
+
+ # Instead of: device1 == device2
+ # Use: get_node_path(device1) == get_node_path(device2)
+
+ # Dictionary keys:
+ device_cache = {get_node_path(device): data}
+
## Functions
### as_pyval
@@ -137,6 +152,27 @@ Example use:
node = ncs.maagic.get_node(t, '/ncs:devices/device{ce0}')
+### get_node_path
+
+```python
+get_node_path(node)
+```
+
+Get the keypath of a maagic node.
+
+Provides reliable node identification across NSO versions where object
+caching behavior has changed.
+
+Arguments:
+* node -- the maagic node (maagic.Node)
+
+Returns:
+* keypath of the node as a string (str or None)
+
+Example:
+ if get_node_path(device1) == get_node_path(device2):
+ print("Same device")
+
### get_root
```python
diff --git a/developer-reference/pyapi/ncs.maapi.md b/developer-reference/pyapi/ncs.maapi.md
index a355f86f..a2778e68 100644
--- a/developer-reference/pyapi/ncs.maapi.md
+++ b/developer-reference/pyapi/ncs.maapi.md
@@ -1584,6 +1584,41 @@ Returns:
+get_template_variables(...)
+
+Method:
+
+```python
+get_template_variables(self, name, type_enum)
+```
+
+Get the variables used in the template given by name. The type_enum argument
+is a TemplateTypes value specifying the template type.
+
+
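+A hedged usage sketch (the template name is illustrative; it assumes this
+method is available on a connected ncs.maapi.Maapi instance, reachable here
+through a Transaction's maapi attribute):
+
+```python
+import ncs.maapi as maapi
+from ncs.maapi import TemplateTypes
+
+with maapi.single_read_trans('admin', 'system') as t:
+    variables = t.maapi.get_template_variables('acme-base-config',
+                                                TemplateTypes.DEVICE_TEMPLATE)
+```
+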
+
+
+
+get_trans_mode(...)
+
+Method:
+
+```python
+get_trans_mode(self, th)
+```
+
+Get transaction mode for a transaction handle.
+
+Arguments:
+* th -- a transaction handle.
+
+Returns:
+
+* Either READ or READ_WRITE flag (ncs) or -1 (no transaction).
+
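+A hedged sketch using the Transaction wrapper (user, context and flag check
+are illustrative):
+
+```python
+import ncs
+import ncs.maapi as maapi
+
+with maapi.single_write_trans('admin', 'system') as t:
+    mode = t.maapi.get_trans_mode(t.th)
+    if mode == ncs.READ_WRITE:
+        print('transaction is read-write')
+```
+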
+
+
+
+
ip
_Readonly property_
@@ -2395,6 +2430,68 @@ Close the user session.
+### _class_ **TemplateTypes**
+
+Enumeration for template types:
+DEVICE_TEMPLATE = 0
+SERVICE_TEMPLATE = 1
+COMPLIANCE_TEMPLATE = 2
+
+```python
+TemplateTypes(*values)
+```
+
+Members:
+
+
+
+COMPLIANCE_TEMPLATE
+
+```python
+COMPLIANCE_TEMPLATE = 2
+```
+
+
+
+
+
+
+DEVICE_TEMPLATE
+
+```python
+DEVICE_TEMPLATE = 0
+```
+
+
+
+
+
+
+SERVICE_TEMPLATE
+
+```python
+SERVICE_TEMPLATE = 1
+```
+
+
+
+
+
+
+name
+
+The name of the Enum member.
+
+
+
+
+
+value
+
+The value of the Enum member.
+
+
+
### _class_ **Transaction**
Class that corresponds to a single MAAPI transaction.
diff --git a/developer-reference/pyapi/ncs.md b/developer-reference/pyapi/ncs.md
index 81e4b1b5..c2ff4782 100644
--- a/developer-reference/pyapi/ncs.md
+++ b/developer-reference/pyapi/ncs.md
@@ -172,6 +172,7 @@ ERR_BADSTATE = 17
ERR_BADTYPE = 5
ERR_BAD_CONFIG = 36
ERR_BAD_KEYREF = 14
+ERR_BAD_PAYLOAD = 72
ERR_CLI_CMD = 59
ERR_DATA_MISSING = 58
ERR_EOF = 45
@@ -350,6 +351,18 @@ TRACE = 2
TRANSACTION = 5
TRANS_CB_FLAG_FILTERED = 1
TRUE = 1
+TYPE_BITS = 3
+TYPE_DECIMAL64 = 4
+TYPE_DISPLAY_HINT = 10
+TYPE_ENUM = 1
+TYPE_IDENTITY = 11
+TYPE_IDREF = 2
+TYPE_LIST = 6
+TYPE_LIST_RESTR = 9
+TYPE_NONE = 0
+TYPE_NUMBER = 7
+TYPE_STRING = 8
+TYPE_UNION = 5
USESS_FLAG_FORWARD = 1
USESS_FLAG_HAS_IDENTIFICATION = 2
USESS_FLAG_HAS_OPAQUE = 4
diff --git a/development/advanced-development/developing-neds/cli-ned-development.md b/development/advanced-development/developing-neds/cli-ned-development.md
index a15c643b..33a7cb1a 100644
--- a/development/advanced-development/developing-neds/cli-ned-development.md
+++ b/development/advanced-development/developing-neds/cli-ned-development.md
@@ -6,7 +6,7 @@ description: Create CLI NEDs.
The CLI NED is a model-driven way to CLI script towards all Cisco-like devices. Some Java code is necessary for handling the corner cases a human-to-machine interface presents.
-See the [examples.ncs/device-manager/cli-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/cli-ned) for an example of a Java implementation serving any YANG models, including those that come with the example.
+See the [examples.ncs/device-management/cli-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/cli-ned) for an example of a Java implementation serving any YANG models, including those that come with the example.
The NSO CLI NED southbound of NSO shares a Cisco-style CLI engine with the northbound NSO CLI interface, and the CLI engine can thus run in both directions, producing CLI southbound and interpreting CLI data coming from southbound while presenting a CLI interface northbound. It is helpful to keep this in mind when learning and working with CLI NEDs.
diff --git a/development/advanced-development/developing-neds/generic-ned-development.md b/development/advanced-development/developing-neds/generic-ned-development.md
index 142e6189..6a4a394f 100644
--- a/development/advanced-development/developing-neds/generic-ned-development.md
+++ b/development/advanced-development/developing-neds/generic-ned-development.md
@@ -35,7 +35,7 @@ state admin-state unlocked
...
```
-The [examples.ncs/device-management/](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/xmlrpc-device)[generic-xmlrpc-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/generic-xmlrpc-ned) example in the NSO examples collection implements a generic NED that speaks XML-RPC to 3 HTTP servers. The HTTP servers run the Apache XML-RPC server code and the NED code manipulates the 3 HTTP servers using a number of predefined XML RPC calls.
+The [examples.ncs/device-management/](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/xmlrpc-device)[generic-xmlrpc-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/generic-xmlrpc-ned) example in the NSO examples collection implements a generic NED that speaks XML-RPC to 3 HTTP servers. The HTTP servers run the Apache XML-RPC server code and the NED code manipulates the 3 HTTP servers using a number of predefined XML RPC calls.
A good starting point when we wish to implement a new generic NED is the `ncs-make-package --generic-ned-skeleton ...` command, which is used to generate a skeleton package for a generic NED.
@@ -83,7 +83,7 @@ Often a useful technique with generic NEDs can be to write a pyang plugin to gen
Pyang is an extensible and open-source YANG parser (written by Tail-f) available at `http://www.yang-central.org`. pyang is also part of the NSO release. A number of plugins are shipped in the NSO release, for example `$NCS_DIR/lib/pyang/pyang/plugins/tree.py` is a good plugin to start with if we wish to write our own plugin.
-The [examples.ncs/device-management/](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/xmlrpc-device)[generic-xmlrpc-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/generic-xmlrpc-ned) example is a good example to start with if we wish to write a generic NED. It manages a set of devices over the XML-RPC protocol. In this example, we have:
+The [examples.ncs/device-management/](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/xmlrpc-device)[generic-xmlrpc-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/generic-xmlrpc-ned) example is a good example to start with if we wish to write a generic NED. It manages a set of devices over the XML-RPC protocol. In this example, we have:
* Defined a fictitious YANG model for the device.
* Implemented an XML-RPC server exporting a set of RPCs to manipulate that fictitious data model. The XML-RPC server runs the Apache `org.apache.xmlrpc.server.XmlRpcServer` Java package.
@@ -161,7 +161,7 @@ A device we wish to manage using a NED usually has not just configuration data t
The commands on the device we wish to be able to invoke from NSO must be modeled as actions. We model this as actions and compile it using a special `ncsc` command to compile NED data models that do not directly relate to configuration data on the device.
-The [examples.ncs/device-management/](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/xmlrpc-device)[generic-xmlrpc-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/generic-xmlrpc-ned) example managed device, a fictitious XML-RPC device, contains a YANG snippet:
+The [examples.ncs/device-management/](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/xmlrpc-device)[generic-xmlrpc-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/generic-xmlrpc-ned) example managed device, a fictitious XML-RPC device, contains a YANG snippet:
```yang
container commands {
diff --git a/development/advanced-development/developing-neds/ned-upgrades-and-migration.md b/development/advanced-development/developing-neds/ned-upgrades-and-migration.md
index 5b2b7f4b..2eccc65a 100644
--- a/development/advanced-development/developing-neds/ned-upgrades-and-migration.md
+++ b/development/advanced-development/developing-neds/ned-upgrades-and-migration.md
@@ -16,6 +16,6 @@ These features aim to lower the barrier of upgrading NEDs and significantly redu
By using the `/ncs:devices/device/migrate` action, you can change the NED major/minor version of a device. The action migrates all configuration and service meta-data. The action can also be executed in parallel on a device group or on all devices matching a NED identity. The procedure for migrating devices is further described in [NED Migration](../../../administration/management/ned-administration.md#sec.ned\_migration).
-Additionally, the example [examples.ncs/device-management/ned-migration](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/ned-migration) in the NSO examples collection illustrates how to migrate devices between different NED versions using the `migrate` action.
+Additionally, the example [examples.ncs/device-management/ned-migration](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/ned-migration) in the NSO examples collection illustrates how to migrate devices between different NED versions using the `migrate` action.
What makes it particularly useful to a service developer is that the action reports what paths have been modified and the service instances affected by those changes. This information can then be used to prepare the service code to handle the new NED version. If the `verbose` option is used, all service instances are reported instead of just the service points. If the `dry-run` option is used, the action simply reports what it would do. This gives you the chance to analyze before any actual change is performed.
diff --git a/development/advanced-development/developing-neds/netconf-ned-development.md b/development/advanced-development/developing-neds/netconf-ned-development.md
index a942609e..439cb2ec 100644
--- a/development/advanced-development/developing-neds/netconf-ned-development.md
+++ b/development/advanced-development/developing-neds/netconf-ned-development.md
@@ -17,7 +17,7 @@ Creating a NETCONF NED that uses the built-in NSO NETCONF client can be a pleasa
Before NSO can manage a NETCONF-capable device, a corresponding NETCONF NED needs to be loaded. While no code needs to be written for such NED, it must contain YANG data models for this kind of device. While in some cases, the YANG models may be provided by the device's vendor, devices that implement RFC 6022 YANG Module for NETCONF Monitoring can provide their YANG models using the functionality described in this RFC.
-The NSO example under [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/netconf-ned) implements two shell scripts that use different tools to build a NETCONF NED from a simulated hardware chassis system controller device.
+The NSO example under [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/netconf-ned) implements two shell scripts that use different tools to build a NETCONF NED from a simulated hardware chassis system controller device.
### **The `netconf-console` and `ncs-make-package` Tools**
@@ -35,7 +35,7 @@ The `demo_nb.sh` script in the `netconf-ned` example uses the NSO CLI NETCONF NE
## Using the **`netconf-console`** and **`ncs-make-package`** Combination
-For a demo of the steps below, see the README in the [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/netconf-ned) example and run the demo.sh script.
+For a demo of the steps below, see the README in the [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/netconf-ned) example and run the demo.sh script.
### **Make the Device YANG Data Models Available to NSO**
@@ -181,11 +181,11 @@ fetch-result {
result true
```
-NSO can now configure the device, state data can be read, actions can be executed, and notifications can be received. See the [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/netconf-ned) `demo.sh` example script for a demo.
+NSO can now configure the device, state data can be read, actions can be executed, and notifications can be received. See the [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/netconf-ned) `demo.sh` example script for a demo.
## Using the NETCONF NED Builder Tool
-For a demo of the steps below, see README in the [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/netconf-ned) example and run the `demo_nb.sh` script.
+For a demo of the steps below, see the README in the [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/netconf-ned) example and run the `demo_nb.sh` script.
### **Configure the Device Connection**
@@ -623,7 +623,7 @@ devices device hw0
...
```
-NSO can now configure the device, state data can be read, actions can be executed, and notifications can be received. See the [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/netconf-ned) `demo_nb.sh` example script for a demo.
+NSO can now configure the device, state data can be read, actions can be executed, and notifications can be received. See the [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/netconf-ned) `demo_nb.sh` example script for a demo.
### **Remove a NED from NSO**
diff --git a/development/advanced-development/developing-neds/snmp-ned.md b/development/advanced-development/developing-neds/snmp-ned.md
index 5b169cb9..71037dc6 100644
--- a/development/advanced-development/developing-neds/snmp-ned.md
+++ b/development/advanced-development/developing-neds/snmp-ned.md
@@ -26,7 +26,7 @@ To add a device, the following steps need to be followed. They are described in
## Compiling and Loading MIBs
-(See the `Makefile` in the [examples.ncs/device-management/snmp-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/snmp-ned) example under `packages/ex-snmp-ned/src/Makefile`, for an example of the below description.) Make sure that you have all MIBs available, including import dependencies, and that they contain no errors.
+(See the `Makefile` in the [examples.ncs/device-management/snmp-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/snmp-ned) example, under `packages/ex-snmp-ned/src/Makefile`, for an example of the description below.) Make sure that you have all MIBs available, including import dependencies, and that they contain no errors.
The `ncsc --ncs-compile-mib-bundle` compiler is used to compile MIBs and MIB annotation files into NSO load files. Assuming a directory with input MIB files (and optional MIB annotation files) exist, the following command compiles all the MIBs in `device-models` and writes the output to `ncs-device-model-dir`.
@@ -139,7 +139,7 @@ Some SNMP agents require a certain order of row deletions and creations. By defa
Sometimes rows in some SNMP agents cannot be modified once created. Such rows can be marked with the annotation `ned-recreate-when-modified`. This makes the SNMP NED to first delete the row, and then immediately recreate it with the new values.
-A good starting point for understanding annotations is to look at the example in the [examples.ncs/device-management/snmp-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/snmp-ned) directory. The BASIC-CONFIG-MIB mib has a table where rows can be modified if the `bscActAdminState` is set to locked. To have NSO do this automatically when modifying entries rather than leaving it to users an annotation file can be created. See the `BASIC-CONFIG-MIB.miba` which contains the following:
+A good starting point for understanding annotations is to look at the example in the [examples.ncs/device-management/snmp-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/snmp-ned) directory. The BASIC-CONFIG-MIB MIB has a table where rows can be modified if the `bscActAdminState` is set to locked. To have NSO do this automatically when modifying entries, rather than leaving it to users, an annotation file can be created. See the `BASIC-CONFIG-MIB.miba` which contains the following:
```
## NCS Annotation module for BASIC-CONFIG-MIB
@@ -158,7 +158,7 @@ Make sure that the MIB annotation file is put into the directory where all the M
NSO can manage SNMP devices within transactions, a transaction can span Cisco devices, NETCONF devices, and SNMP devices. If a transaction fails NSO will generate the reverse operation to the SNMP device.
-The basic features of the SNMP will be illustrated below by using the [examples.ncs/device-management/snmp-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/snmp-ned) example. First, try to connect to all SNMP devices:
+The basic features of the SNMP NED will be illustrated below using the [examples.ncs/device-management/snmp-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/snmp-ned) example. First, try to connect to all SNMP devices:
```cli
admin@ncs# devices connect
diff --git a/development/advanced-development/developing-packages.md b/development/advanced-development/developing-packages.md
index ab1946fc..1ca91bfb 100644
--- a/development/advanced-development/developing-packages.md
+++ b/development/advanced-development/developing-packages.md
@@ -123,7 +123,7 @@ The `netsim` directory contains three files:
6. `%NAME%` - for the name of the ConfD instance.
7. `%COUNTER%` - for the number of the ConfD instance
* The `Makefile` should compile the YANG files so that ConfD can run them. The `Makefile` should also have an `install` target that installs all files required for ConfD to run one instance of a simulated network element. This is typically all `fxs` files.
-* An optional `start.sh` file where additional programs can be started. A good example of a package where the netsim component contains some additional C programs is the `webserver` package in [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/website-service) example.
+* An optional `start.sh` file where additional programs can be started. A good example of a package where the netsim component contains some additional C programs is the `webserver` package in the [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/website-service) example.
Remember the picture of the network we wish to work with, there the routers, PE and CE, have an IP address and some additional data. So far here, we have generated a simulated network with YANG models. The routers in our simulated network have no data in them, we can log in to one of the routers to verify that:
@@ -138,7 +138,7 @@ admin@zoe> exit
The ConfD devices in our simulated network all have a Juniper CLI engine, thus we can, using the command `ncs-netsim cli [devicename]`, log in to an individual router.
-To achieve this, we need to have some additional XML initializing files for the ConfD instances. It is the responsibility of the `install` target in the netsim Makefile to ensure that each ConfD instance gets initialized with the proper init data. In the NSO example collection, the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-java) and [examples.ncs/service-management/mpls-vpn-python](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-python) examples contain the two above-mentioned PE and CE packages but modified, so that the network elements in the simulated network get initialized properly.
+To achieve this, we need to have some additional XML initializing files for the ConfD instances. It is the responsibility of the `install` target in the netsim Makefile to ensure that each ConfD instance gets initialized with the proper init data. In the NSO example collection, the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-java) and [examples.ncs/service-management/mpls-vpn-python](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-python) examples contain the two above-mentioned PE and CE packages but modified, so that the network elements in the simulated network get initialized properly.
If we run that example in the NSO example collection we see:
@@ -202,7 +202,7 @@ With the scripting mechanism, an end-user can add new functionality to NSO in a
Scripts defined in an NSO package work pretty much as system-level scripts configured with the `/ncs-config/scripts/dir` configuration parameter. The difference is that the location of the scripts is predefined. The scripts directory must be named `scripts` and must be located in the top directory of the package.
-In this complete example [examples.ncs/sdk-api/scripting](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/scripting), there is a `README` file and a simple post-commit script `packages/scripting/scripts/post-commit/show_diff.sh` as well as a simple command script `packages/scripting/scripts/command/echo.sh`.
+In this complete example [examples.ncs/sdk-api/scripting](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/scripting), there is a `README` file and a simple post-commit script `packages/scripting/scripts/post-commit/show_diff.sh` as well as a simple command script `packages/scripting/scripts/command/echo.sh`.
## Creating a Service Package
@@ -538,7 +538,7 @@ In debugging and error reporting, these root cause messages can be valuable to u
* `verbose`: Show all messages for the chain of cause exceptions, if any.
* `trace`: Show messages for the chain of cause exceptions with exception class and the trace for the bottom root cause.
-Here is an example of how this can be used. In the [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/website-service) example, we try to create a service without the necessary pre-preparations:
+Here is an example of how this can be used. In the [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/website-service) example, we try to create a service without the necessary preparations:
{% code title="Example: Setting Error Message Verbosity" %}
```cli
diff --git a/development/advanced-development/developing-services/service-development-using-java.md b/development/advanced-development/developing-services/service-development-using-java.md
index 5dbf4d95..6ae16d74 100644
--- a/development/advanced-development/developing-services/service-development-using-java.md
+++ b/development/advanced-development/developing-services/service-development-using-java.md
@@ -698,7 +698,7 @@ The steps to build the solution described in this section are:
## Layer 3 MPLS VPN Service
-This service shows a more elaborate service mapping. It is based on the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-java) example.
+This service shows a more elaborate service mapping. It is based on the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-java) example.
MPLS VPNs are a type of Virtual Private Network (VPN) that achieves segmentation of network traffic using Multiprotocol Label Switching (MPLS), often found in Service Provider (SP) networks. The Layer 3 variant uses BGP to connect and distribute routes between sites of the VPN.
@@ -751,7 +751,7 @@ The information needed to sort out what PE router a CE router is connected to as
### Creating a Multi-Vendor Service
-This section describes the creation of an MPLS L3VPN service in a multi-vendor environment by applying the concepts described above. The example discussed can be found in [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-java). The example network consists of Cisco ASR 9k and Juniper core routers (P and PE) and Cisco IOS-based CE routers.
+This section describes the creation of an MPLS L3VPN service in a multi-vendor environment by applying the concepts described above. The example discussed can be found in [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-java). The example network consists of Cisco ASR 9k and Juniper core routers (P and PE) and Cisco IOS-based CE routers.
The goal of the NSO service is to set up an MPLS Layer3 VPN on a number of CE router endpoints using BGP as the CE-PE routing protocol. Connectivity between the CE and PE routers is done through a Layer2 Ethernet access network, which is out of the scope of this service. In a real-world scenario, the access network could for example be handled by another service.
diff --git a/development/advanced-development/developing-services/services-deep-dive.md b/development/advanced-development/developing-services/services-deep-dive.md
index 02af8ab8..baae09f7 100644
--- a/development/advanced-development/developing-services/services-deep-dive.md
+++ b/development/advanced-development/developing-services/services-deep-dive.md
@@ -112,7 +112,7 @@ Location of the plan data if the service plan is used. See [Nano Services for St
-While not part of `ncs:service-data` as such, you may consider the `service-commit-queue-event` notification part of the core service interface. The notification provides information about the state of the service when the service uses the commit queue. As an example, an event-driven application uses this notification to find out when a service instance has been deployed to the devices. See the `showcase_rc.py` script in [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-stack) for sample Python code, leveraging the notification. See `tailf-ncs-services.yang` for the full definition of the notification.
+While not part of `ncs:service-data` as such, you may consider the `service-commit-queue-event` notification part of the core service interface. The notification provides information about the state of the service when the service uses the commit queue. As an example, an event-driven application uses this notification to find out when a service instance has been deployed to the devices. See the `showcase_rc.py` script in [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-stack) for sample Python code, leveraging the notification. See `tailf-ncs-services.yang` for the full definition of the notification.
NSO Service Manager is responsible for providing the functionality of the common service interface, requiring no additional user code. This interface is the same for classic and nano services, whereas nano services further extend the model.
@@ -232,7 +232,7 @@ The Java callbacks use the following function arguments:
* `service`: A NavuNode for the service instance.
* `opaque`: Opaque service properties, see [Persistent Opaque Data](services-deep-dive.md#ch_svcref.opaque).
-See [examples.ncs/service-management/iface-postmod-py](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/iface-postmod-py) and [examples.ncs/service-management/iface-postmod-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/iface-postmod-java) examples for a sample implementation of the post-modification callback.
+See [examples.ncs/service-management/iface-postmod-py](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/iface-postmod-py) and [examples.ncs/service-management/iface-postmod-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/iface-postmod-java) examples for a sample implementation of the post-modification callback.
Additionally, you may implement these callbacks with templates. Refer to [Service Callpoints and Templates](../../core-concepts/templates.md#ch_templates.servicepoint) for details.
@@ -288,7 +288,7 @@ Compared to pre- and post-modification callbacks, which also persist data outsid
```
{% endcode %}
-The [examples.ncs/service-management/iface-postmod-py](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/iface-postmod-py) and [examples.ncs/service-management/iface-postmod-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/iface-postmod-java) examples showcase the use of opaque properties.
+The [examples.ncs/service-management/iface-postmod-py](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/iface-postmod-py) and [examples.ncs/service-management/iface-postmod-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/iface-postmod-java) examples showcase the use of opaque properties.
## Defining Static Service Conflicts
@@ -326,7 +326,7 @@ Furthermore, containers and list items created using the `sharedCreate()` and `s
`backpointer` points back to the service instance that created the entity in the first place. This makes it possible to look at part of the configuration, say under `/devices` tree, and answer the question: which parts of the device configuration were created by which service?
-To see reference counting in action, start the [examples.ncs/service-management/implement-a-service/iface-v3](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/implement-a-service/iface-v3) example with `make demo` and configure a service instance.
+To see reference counting in action, start the [examples.ncs/service-management/implement-a-service/iface-v3](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/iface-v3) example with `make demo` and configure a service instance.
```bash
admin@ncs(config)# iface instance1 device c1 interface 0/1 ip-address 10.1.2.3 cidr-netmask 28
@@ -411,7 +411,7 @@ Then you create a higher-level service, say a CFS, that configures another servi
```
{% endcode %}
-The preceding example references an existing `iface` service, such as the one in the [examples.ncs/service-management/implement-a-service/iface-v3](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/implement-a-service/iface-v3) example. The output shows hard-coded values but you can change those as you would for any other service.
+The preceding example references an existing `iface` service, such as the one in the [examples.ncs/service-management/implement-a-service/iface-v3](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/iface-v3) example. The output shows hard-coded values but you can change those as you would for any other service.
In practice, you might find it beneficial to modularize your data model and potentially reuse parts in both, the lower- and higher-level service. This avoids duplication while still allowing you to directly expose some of the lower-level service functionality through the higher-level model.
@@ -777,7 +777,7 @@ This approach provides an excellent way to maintain an overview of services depl
To address this, we can nest the services within another list. By organizing all services under a common structure, we enable the ability to view and manage multiple service types for a device in a unified manner, providing a comprehensive overview with a single command.
-To illustrate this approach, we need to introduce another service type. Moving beyond the dummy example, let’s use a more realistic scenario: the [mpls-vpn-simple](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-simple) example. We'll refactor this service to adopt the stacked service approach while maintaining the existing customer-facing interface.
+To illustrate this approach, we need to introduce another service type. Moving beyond the dummy example, let’s use a more realistic scenario: the [mpls-vpn-simple](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-simple) example. We'll refactor this service to adopt the stacked service approach while maintaining the existing customer-facing interface.
After the refactor, the service will shift from provisioning multiple devices directly through a single instance to creating a separate service instance for each device, VPN, and endpoint, what we call resource-facing services. These resource-facing services will be structured so that all device-specific services are grouped under a node for each device.
@@ -986,7 +986,7 @@ You may also obtain some useful information by using the `debug service` commit
However, the service may also delete data implicitly, through `when` and `choice` statements in the YANG data model. If a `when` statement evaluates to false, the configuration tree below that node is deleted. Likewise, if a `case` is set in a `choice` statement, the previously set `case` is deleted. This has the same limitations as an explicit delete.
\
- To avoid these issues, create a separate service, that only handles deletion, and use it in the main service through the stacked service design (see [Stacked Services](services-deep-dive.md#ch_svcref.stacking)). This approach allows you to reference count the deletion operation and contains the effect of restoring deleted data through a small, rarely-changing helper service. See [examples.ncs/service-management/shared-delete](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/shared-delete) for an example.
+ To avoid these issues, create a separate service that only handles deletion, and use it in the main service through the stacked service design (see [Stacked Services](services-deep-dive.md#ch_svcref.stacking)). This approach allows you to reference count the deletion operation and contains the effect of restoring deleted data through a small, rarely-changing helper service. See [examples.ncs/service-management/shared-delete](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/shared-delete) for an example.
\
Alternatively, you might consider pre- and post-modification callbacks for some specific cases.
@@ -1001,7 +1001,7 @@ You may also obtain some useful information by using the `debug service` commit
```
\
- Likewise, do not use other MAAPI `load_config` variants from the service code. Use the `loadConfigCmds()` or `sharedSetValues()` function to load XML data from a file or a string. See [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-bulkcreate) for an example.
+ Likewise, do not use other MAAPI `load_config` variants from the service code. Use the `loadConfigCmds()` or `sharedSetValues()` function to load XML data from a file or a string. See [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-bulkcreate) for an example.
* **Reordering ordered-by-user lists**: If the service code rearranges an ordered-by-user list with items that were created by another service, that other service becomes out of sync. In some cases, you might be able to avoid out-of-sync scenarios by leveraging special XML template syntax (see [Operations on ordered lists and leaf-lists](../../core-concepts/templates.md#ch_templates.order_ops)) or using service stacking with a helper service.
In general, however, you should reconsider your design and try to avoid such scenarios.
@@ -1033,7 +1033,7 @@ A prerequisite (or possibly the product in an iterative approach) is an NSO serv
Alternatively, some parts of the configuration could be managed as out-of-band, in order to simplify and expedite the development of the service model and the mapping logic. But out-of-band data has more limitations when used with service updates. See [Out-of-band Interoperation](../../../operation-and-usage/operations/out-of-band-interoperation.md) for specific disadvantages and carefully consider if out-of-band data is really the right choice.
-In the simplest case, there is only one variant and that is the one that the service needs to produce. Let's take the [examples.ncs/service-management/implement-a-service/iface-v2-py](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/implement-a-service/iface-v2-py) example and consider what happens when a device already has an existing interface configuration.
+In the simplest case, there is only one variant and that is the one that the service needs to produce. Let's take the [examples.ncs/service-management/implement-a-service/iface-v2-py](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/iface-v2-py) example and consider what happens when a device already has an existing interface configuration.
```bash
admin@ncs# show running-config devices device c1 config\
@@ -1351,7 +1351,7 @@ admin@ncs# iface instance2 re-deploy reconcile
Nevertheless, keep in mind that the `discard-non-service-config` reconcile operation only considers parts of the device configuration under nodes that are created with the service mapping. Even if all data there is covered in the mapping, there could still be other parts that belong to the service but reside in an entirely different section of the device configuration (say DNS configuration under `ip name-server`, which is outside the `interface GigabitEthernet` part) or even on a different device. The `discard-non-service-config` option cannot find that kind of configuration on its own; you must add it manually.
-You can find the complete `iface` service as part of the [examples.ncs/service-management/discovery](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/discovery) example.
+You can find the complete `iface` service as part of the [examples.ncs/service-management/discovery](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/discovery) example.
Since there were only two service instances to reconcile, the process is now complete. In practice, you are likely to encounter multiple variants and many more service instances, requiring you to make additional iterations. But you can follow the iterative process shown here.
@@ -1371,7 +1371,7 @@ It is important to note that `partial-sync-from` and `partial-sync-to` clear the
Pulling the configuration from the network needs to be initiated outside the service code. At the same time, the list of configuration subtrees required by a certain service should be maintained by the service developer. Hence it is a good practice for such a service to implement a wrapper action that invokes the generic `/devices/partial-sync-from` action with the correct list of paths. The user or application that manages the service would only need to invoke the wrapper action without needing to know which parts of the configuration the service is interested in.
-The snippet in the example below shows running the `partial-sync-from` action via Java, using the `router` device from the [examples.ncs/device-management/router-network](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/router-network) example.
+The snippet in the example below shows running the `partial-sync-from` action via Java, using the `router` device from the [examples.ncs/device-management/router-network](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/router-network) example.
{% code title="Example of Running partial-sync-from Action via Java API" %}
```java
diff --git a/development/advanced-development/kicker.md b/development/advanced-development/kicker.md
index 538f8940..2c324fd7 100644
--- a/development/advanced-development/kicker.md
+++ b/development/advanced-development/kicker.md
@@ -249,7 +249,7 @@ Monitor expressions are expanded and installed in an internal data structure at
### A Simple Data Kicker Example
-This example is part of the [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/website-service) example. It consists of an action and a `README_KICKER` file. For all kickers defined in this example, the same action is used. This action is defined in the `website-service` package.
+This example is part of the [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/website-service) example. It consists of an action and a `README_KICKER` file. For all kickers defined in this example, the same action is used. This action is defined in the `website-service` package.
The following is the YANG snippet for the action definition from the `website.yang` file:
@@ -334,7 +334,7 @@ class WebSiteServiceRFS {
}
```
-We are now ready to start the [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/website-service) example and define our data kicker. Do the following:
+We are now ready to start the [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/website-service) example and define our data kicker. Do the following:
```bash
$ make all
@@ -498,7 +498,7 @@ When using both, serializer and priority, only kickers with the same serializer
In this example, we use the same action and setup as in the data kicker example above. The procedure for starting is also the same.
-The [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/website-service) example has devices that have notifications generated on the stream "interface". We start with defining the notification kicker for a certain `SUBSCRIPTION_NAME = 'mysub'`. This subscription does not exist for the moment and the kicker will therefore not be triggered:
+The [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/website-service) example has devices that have notifications generated on the stream "interface". We start by defining the notification kicker for a certain `SUBSCRIPTION_NAME = 'mysub'`. This subscription does not yet exist, so the kicker will not be triggered:
```cli
admin@ncs# config
diff --git a/development/advanced-development/scaling-and-performance-optimization.md b/development/advanced-development/scaling-and-performance-optimization.md
index 052e3e73..b8b5e960 100644
--- a/development/advanced-development/scaling-and-performance-optimization.md
+++ b/development/advanced-development/scaling-and-performance-optimization.md
@@ -194,7 +194,7 @@ For progress trace documentation, see [Progress Trace](progress-trace.md).
### Running the `perf-trans` Example Using a Single Transaction
-The [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-trans) example from the NSO example set explores the opportunities to improve the wall-clock time performance and utilization, as well as opportunities to avoid common pitfalls.
+The [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example from the NSO example set explores opportunities to improve wall-clock time performance and utilization, as well as ways to avoid common pitfalls.
The example uses simulated CPU loads for service creation and validation work. Device work is simulated with `sleep()`, since in a production system it would not run on the same processor.
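
Conceptually, the simulation boils down to something like the following sketch (not the actual code from the example):

```python
import time

def simulate_service_work(seconds):
    # Busy-loop to burn CPU, standing in for service create and validation
    # work that runs on the NSO host.
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        pass

def simulate_device_work(seconds):
    # Sleep to model waiting for a device to apply configuration;
    # no local CPU is consumed while waiting.
    time.sleep(seconds)
```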
@@ -202,15 +202,15 @@ The example shows how NSO can benefit from running many transactions concurrentl
The provided code sets up an NSO instance that exports tracing data to a `.csv` file, provisions one or more service instances, each of which maps to a device, and shows different (average) transaction times and a graph to visualize the sequences and concurrency.
-Play with the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-trans) example by tweaking the `measure.py` script parameters:
+Play with the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example by tweaking the `measure.py` script parameters:
```code
plain patch
```
-See the README in the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-trans) example for details.
+See the README in the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example for details.
-To run the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-trans) example from the NSO example set and recreate the variant shown in the progress trace above:
+To run the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example from the NSO example set and recreate the variant shown in the progress trace above:
```bash
cd $NCS_DIR/examples.ncs/scaling-performance/perf-trans
@@ -278,9 +278,9 @@ Suppose a service creates a significant amount of configuration data for devices
#### **Running the `perf-bulkcreate` Example Using a Single Call to MAAPI `shared_set_values()`**
-The [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/blob/main/scaling-performance/perf-bulkcreate) example writes configuration to an access control list and a route list of a Cisco Adaptive Security Appliance (ASA) device. It uses either MAAPI Python with a configuration template, `create()` and `set()` calls, Python `shared_set_values()` and `load_config_cmds()`, or Java `sharedSetValues()` and `loadConfigCmds()` to write the configuration in XML format.
+The [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-bulkcreate) example writes configuration to an access control list and a route list of a Cisco Adaptive Security Appliance (ASA) device. It uses either MAAPI Python with a configuration template, Python `create()` and `set()` calls, Python `shared_set_values()` and `load_config_cmds()`, or Java `sharedSetValues()` and `loadConfigCmds()` to write the configuration in XML format.
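
For orientation, the per-entry `create()`/`set()` variant amounts to a Python loop along these lines; the list and leaf names are placeholders, not the actual ASA model used by the example:

```python
# Placeholder model: a 'rules' list keyed by a string id with an 'action' leaf.
def add_rules(config_root, count):
    for i in range(count):
        rule = config_root.rules.create(str(i))  # one MAAPI operation per entry
        rule.action = 'permit'                   # and one more per leaf set
```

Each `create()` and leaf assignment is a separate round trip to NSO, which is why a single bulk `shared_set_values()` call is significantly faster when writing thousands of entries.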
-To run the [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/blob/main/scaling-performance/perf-bulkcreate) example using MAAPI Python `create()` and `set()` calls to create 3000 rules and 3000 routes on one device:
+To run the [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-bulkcreate) example using MAAPI Python `create()` and `set()` calls to create 3000 rules and 3000 routes on one device:
```bash
cd $NCS_DIR/examples.ncs/scaling-performance/perf-bulkcreate
@@ -291,7 +291,7 @@ The commit uses the `no-networking` parameter to skip pushing the configuration
-Next, run the [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/blob/main/scaling-performance/perf-bulkcreate) example using a single MAAPI Python `shared_set_values()` call to create 3000 rules and 3000 routes on one device:
+Next, run the [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-bulkcreate) example using a single MAAPI Python `shared_set_values()` call to create 3000 rules and 3000 routes on one device:
```
./measure.sh -r 3000 -t py_setvals_xml -n true
@@ -319,7 +319,7 @@ Writing to devices and other network elements that are slow to configure will st
### Running the `perf-trans` Example Using One Transaction per Device
-Dividing the service creation and validation work into two separate transactions, one per device, allows the work to be spread across two CPU cores in a multi-core processor. To run the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-trans) example with the work divided into one transaction per device:
+Dividing the service creation and validation work into two separate transactions, one per device, allows the work to be spread across two CPU cores in a multi-core processor. To run the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example with the work divided into one transaction per device:
```bash
cd $NCS_DIR/examples.ncs/scaling-performance/perf-trans
@@ -359,7 +359,7 @@ For commit queue documentation, see [Commit Queue](../../operation-and-usage/ope
### Enabling Commit Queues for the `perf-trans` Example
-Enabling commit queues allows the two transactions to spread the create, validation, and configuration push to devices work across CPU cores in a multi-core processor. Only the CDB write and commit queue write now remain inside the critical section, and the transaction lock is released as soon as the device configuration changes have been written to the commit queues instead of waiting for the config push to the devices to complete. To run the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-trans) example with the work divided into one transaction per device and commit queues enabled:
+Enabling commit queues allows the two transactions to spread the create, validation, and configuration push to devices work across CPU cores in a multi-core processor. Only the CDB write and commit queue write now remain inside the critical section, and the transaction lock is released as soon as the device configuration changes have been written to the commit queues instead of waiting for the config push to the devices to complete. To run the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example with the work divided into one transaction per device and commit queues enabled:
```bash
make stop clean NDEVS=2 python
@@ -390,11 +390,11 @@ Stop NSO and the netsim devices:
make stop
```
-Running the [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/blob/main/scaling-performance/perf-bulkcreate) example with two devices and commit queues enabled will produce a similar result.
+Running the [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-bulkcreate) example with two devices and commit queues enabled will produce a similar result.
### Simplify the Per-Device Concurrent Transaction Creation Using a Nano Service
-The [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-trans) example service uses one transaction per service instance where each service instance configures one device. This enables transactions to run concurrently on separate CPU cores in a multi-core processor. The example sends RESTCONF `patch` requests concurrently to start transactions that run concurrently with the NSO transaction manager. However, dividing the work into multiple processes may not be practical for some applications using the NSO northbound interfaces, e.g., CLI or RESTCONF. Also, it makes a future migration to LSA more complex.
+The [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example service uses one transaction per service instance, where each service instance configures one device. This enables transactions to run concurrently on separate CPU cores in a multi-core processor. The example sends RESTCONF `patch` requests concurrently to start transactions that the NSO transaction manager then runs in parallel. However, dividing the work into multiple processes may not be practical for some applications using the NSO northbound interfaces, e.g., CLI or RESTCONF. Also, it makes a future migration to LSA more complex.
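
For illustration only, concurrent RESTCONF requests of this kind can be issued with a small client like the sketch below; the URL, credentials, and the `t3` payload structure are assumptions and not taken from the example.

```python
# Sketch: send one RESTCONF PATCH per service instance, concurrently.
# The service module/list name ("t3:t3") and its leaves are placeholders.
import concurrent.futures
import requests

BASE = 'http://localhost:8080/restconf/data'
AUTH = ('admin', 'admin')
HEADERS = {'Content-Type': 'application/yang-data+json'}

def create_instance(i):
    body = {'t3:t3': [{'name': f't{i}', 'device': f'ex{i}'}]}
    return requests.patch(BASE, json=body, auth=AUTH, headers=HEADERS)

with concurrent.futures.ThreadPoolExecutor() as pool:
    responses = list(pool.map(create_instance, range(2)))
```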
To simplify the NSO manager application, a resource-facing nano service (RFS) can start a process per service instance. The NSO manager application or user can then use a single transaction, e.g., CLI or RESTCONF, to configure multiple service instances where the NSO nano service divides the service instances into transactions running concurrently in separate processes.
@@ -416,7 +416,7 @@ Furthermore, the time spent calculating the diff-set, as seen with the `saving r
### Running the CFS and Nano Service enabled `perf-stack` Example
-The [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-stack) example showcases how a CFS on top of a simple resource-facing nano service can be implemented with the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-trans) example by modifying the existing t3 RFS and adding a CFS. Instead of multiple RESTCONF transactions, the example uses a single CLI CFS service commit that updates the desired number of service instances. The commit configures multiple service instances in a single transaction where the nano service runs each service instance in a separate process to allow multiple cores to be used concurrently.
+The [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-stack) example showcases how a CFS on top of a simple resource-facing nano service can be implemented with the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example by modifying the existing t3 RFS and adding a CFS. Instead of multiple RESTCONF transactions, the example uses a single CLI CFS service commit that updates the desired number of service instances. The commit configures multiple service instances in a single transaction where the nano service runs each service instance in a separate process to allow multiple cores to be used concurrently.
@@ -444,7 +444,7 @@ commit trans=2 RFS nwork=1 nwork=1 cq=True device ddelay=1
wall-clock 1s 1s 1s=3s
```
-The two transactions run concurrently, deploying the service in \~3 seconds (plus some overhead) of wall-clock time. Like the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-trans) example, you can play around with the [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-stack) example by tweaking the parameters.
+The two transactions run concurrently, deploying the service in \~3 seconds (plus some overhead) of wall-clock time. Like the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example, you can play around with the [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-stack) example by tweaking the parameters.
```
-d NDEVS
@@ -473,7 +473,7 @@ The two transactions run concurrently, deploying the service in \~3 seconds (plu
Default: 1 second
```
-See the `README` in the [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-stack) example for details. For even more details, see the steps in the `showcase` script.
+See the `README` in the [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-stack) example for details. For even more details, see the steps in the `showcase` script.
Stop NSO and the netsim devices:
@@ -483,7 +483,7 @@ make stop
### Migrating to and Scaling Up Using an LSA Setup
-If the processor where NSO runs becomes a severe bottleneck, the CFS can migrate to a layered service architecture (LSA) setup. The [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-stack) example implements stacked services, a CFS abstracting the RFS. It allows for easy migration to an LSA setup to scale with the number of devices or network elements participating in the service deployment. While adding complexity, LSA allows exposing a single CFS instance for all processors instead of one per processor.
+If the processor where NSO runs becomes a severe bottleneck, the CFS can migrate to a layered service architecture (LSA) setup. The [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-stack) example implements stacked services, a CFS abstracting the RFS. It allows for easy migration to an LSA setup to scale with the number of devices or network elements participating in the service deployment. While adding complexity, LSA allows exposing a single CFS instance for all processors instead of one per processor.
{% hint style="info" %}
Before taking on the complexity of a multi-NSO node LSA setup, make sure you have done the following:
@@ -502,7 +502,7 @@ Migrating to an LSA setup should only be considered after checking all boxes for
### Running the LSA-enabled `perf-lsa` Example
-The [examples.ncs/scaling-performance/perf-lsa](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-lsa) example builds on the [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-stack) example and showcases an LSA setup using two RFS NSO instances, `lower-nso-1` and `lower-nso-2`, with a CFS NSO instance, `upper-nso`.
+The [examples.ncs/scaling-performance/perf-lsa](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-lsa) example builds on the [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-stack) example and showcases an LSA setup using two RFS NSO instances, `lower-nso-1` and `lower-nso-2`, with a CFS NSO instance, `upper-nso`.
@@ -540,7 +540,7 @@ commit ntrans=2 RFS 1 nwork=1 nwork=1 cq=True device ddelay=1
The four transactions run concurrently, two per RFS node, performing the work and configuring the four devices in \~3 seconds (plus some overhead) of wall-clock time.
-You can play with the [examples.ncs/scaling-performance/perf-lsa](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-lsa) example by tweaking the parameters.
+You can play with the [examples.ncs/scaling-performance/perf-lsa](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-lsa) example by tweaking the parameters.
```
-d LDEVS
@@ -571,7 +571,7 @@ You can play with the [examples.ncs/scaling-performance/perf-lsa](https://github
Default: 1 second
```
-See the `README` in the [examples.ncs/scaling-performance/perf-lsa](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-lsa) example for details. For even more details, see the steps in the `showcase` script.
+See the `README` in the [examples.ncs/scaling-performance/perf-lsa](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-lsa) example for details. For even more details, see the steps in the `showcase` script.
Stop NSO and the netsim devices:
@@ -615,31 +615,6 @@ For small NSO systems, the schema will usually consume more resources than the i
NEDs with a large schema and many YANG models often include a significant number of YANG models that are unused. If RAM usage is an issue, consider removing unused YANG models from such NEDs.
{% endhint %}
-#### Total Committed Memory Impact with Multiple Python VMs
-
-Note that the schema is memory-mapped into shared memory, so even though multiple Python VMs might be started, resident memory usage will not increase proportionally, as the schema is shared between different clients. However, total committed memory (`Committed_AS`) will increase and may cause issues if the `schema size * number of Python VMs` is significant enough that `CommitLimit` is reached.
-
-If increasing the available RAM is not an option, a workaround can be to have all, or a selected subset, of Python-based packages share a `vm-name` and run in the same Python VM thread.
-
-#### Sharing a Python VM Across Packages
-
-To share a Python VM, set the same `vm-name` in each package’s `package-meta-data.xml` file:
-
-{% code title="package-meta-data.xml vm-name config example" overflow="wrap" %}
-```xml
-<ncs-package xmlns="http://tail-f.com/ns/ncs-packages">
-  ...
-  <python-package>
-    <vm-name>shared</vm-name>
-    <callpoint-model>threading</callpoint-model>
-  </python-package>
-  ...
-</ncs-package>
-```
-{% endcode %}
-
-See [The package-meta-data.xml File](../core-concepts/packages.md#d5e4962) for more details. See [Enable Strict Overcommit Accounting](../../administration/installation-and-deployment/system-install.md#enable-strict-overcommit-accounting-on-the-host) or [Overcommit Inside a Container](../../administration/installation-and-deployment/containerized-nso.md#d5e8605) for `Committed_AS` and `CommitLimit` details.
-
#### Note on the Java VM
The Java VM uses its own copy of the schema, which is also why the JVM memory consumption follows the size of the loaded YANG schema.
diff --git a/development/connected-topics/encryption-keys.md b/development/connected-topics/encryption-keys.md
index 526bb78a..0dae6080 100644
--- a/development/connected-topics/encryption-keys.md
+++ b/development/connected-topics/encryption-keys.md
@@ -57,7 +57,7 @@ Example error output:
ERROR=error message
```
-Below is a complete example of an application written in Python providing encryption keys from a plain text file. The application is included in the [examples.ncs/sdk-api/external-encryption-keys](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/external-encryption-keys) example:
+Below is a complete example of an application written in Python providing encryption keys from a plain text file. The application is included in the [examples.ncs/sdk-api/external-encryption-keys](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/external-encryption-keys) example:
```python
#!/usr/bin/env python3
diff --git a/development/connected-topics/scheduler.md b/development/connected-topics/scheduler.md
index ed8988bb..7c1e30a3 100644
--- a/development/connected-topics/scheduler.md
+++ b/development/connected-topics/scheduler.md
@@ -67,7 +67,7 @@ The following list describes the legal special characters and how you can use th
### Scheduling Periodic Compaction
-[Compaction](../../administration/advanced-topics/cdb-persistence.md#compaction) in NSO can take a considerable amount of time, during which transactions could be blocked. To avoid disruption, it might be advantageous to schedule compaction during times of low NSO utilization. This can be done using the NSO scheduler and a service. See [examples.ncs/misc/periodic-compaction](https://github.com/NSO-developer/nso-examples/tree/6.5/misc/periodic-compaction) for an example that demonstrates how to create a periodic compaction service that can be scheduled using the NSO scheduler.
+[Compaction](../../administration/advanced-topics/cdb-persistence.md#compaction) in NSO can take a considerable amount of time, during which transactions could be blocked. To avoid disruption, it might be advantageous to schedule compaction during times of low NSO utilization. This can be done using the NSO scheduler and a service. See [examples.ncs/misc/periodic-compaction](https://github.com/NSO-developer/nso-examples/tree/6.6/misc/periodic-compaction) for an example that demonstrates how to create a periodic compaction service that can be scheduled using the NSO scheduler.
## Scheduling Non-recurring Work
diff --git a/development/connected-topics/snmp-notification-receiver.md b/development/connected-topics/snmp-notification-receiver.md
index d7010039..a680e834 100644
--- a/development/connected-topics/snmp-notification-receiver.md
+++ b/development/connected-topics/snmp-notification-receiver.md
@@ -53,7 +53,7 @@ NSO uses the Java package SNMP4J to parse the SNMP PDUs.
Notification Handlers are user-supplied Java classes that implement the `com.tailf.snmp.snmp4j.NotificationHandler` interface. The `processPDU` method is expected to react to the SNMP4J event, e.g., by mapping the PDU to an NSO alarm. The handlers are registered in the `NotificationReceiver`. The `NotificationReceiver` is the main class that, in addition to maintaining the handlers, reads the NSO SNMP notification configuration and sets up `SNMP4J` listeners accordingly.
-An example of a notification handler can be found at [examples.ncs/device-management/snmp-notification-receiver](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/snmp-notification-receiver). This example handler receives notifications and sets an alarm text if the notification is an `IF-MIB::linkDown trap`.
+An example of a notification handler can be found at [examples.ncs/device-management/snmp-notification-receiver](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/snmp-notification-receiver). This example handler receives notifications and sets an alarm text if the notification is an `IF-MIB::linkDown` trap.
```java
public class ExampleHandler implements NotificationHandler {
diff --git a/development/core-concepts/api-overview/java-api-overview.md b/development/core-concepts/api-overview/java-api-overview.md
index 518c877f..8c17eb7c 100644
--- a/development/core-concepts/api-overview/java-api-overview.md
+++ b/development/core-concepts/api-overview/java-api-overview.md
@@ -283,7 +283,7 @@ Write operations that do not attempt to obtain the subscription lock, are allowe
To view registered subscribers, use the `ncs --status` command. For details on how to use the different subscription functions, see the Javadoc for NSO Java API.
-The code in the [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/cdb-java) example illustrates three different types of CDB subscribers:
+The code in the [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/cdb-java) example illustrates three different types of CDB subscribers:
* A simple CDB config subscriber that utilizes the low-level CDB API directly to subscribe to changes in the subtree of the configuration.
* Two Navu CDB subscribers, one subscribing to configuration changes, and one subscribing to changes in operational data.
@@ -292,7 +292,7 @@ The code in the [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer
The DP API makes it possible to create callbacks which are called when certain events occur in NSO. As the name of the API indicates, it is possible to write data provider callbacks that provide data to NSO that is stored externally. However, this is only one of several callback types provided by this API. There exist callback interfaces for the following types:
-* Service Callbacks - invoked for service callpoints in the YANG model. Implements service to device information mappings. See, for example, [examples.ncs/service-management/rfs-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/rfs-service).
+* Service Callbacks - invoked for service callpoints in the YANG model. Implements service to device information mappings. See, for example, [examples.ncs/service-management/rfs-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/rfs-service).
* Action Callbacks - invoked for a certain action in the YANG model which is defined with a callpoint directive.
* Authentication Callbacks - invoked for external authentication functions.
* Authorization Callbacks - invoked for external authorization of operations and data. Note: avoid this callback if possible, since it otherwise affects performance.
@@ -417,7 +417,7 @@ We also have two additional optional callbacks that may be implemented for effic
* `getObject()`: If this optional callback is implemented, its work is to return an entire `object`, i.e., a list instance. This is not the same `getObject()` as the one used in combination with the `iterator()` callback.
* `numInstances()`: When NSO needs to figure out how many instances we have of a certain element, by default NSO will repeatedly invoke the `iterator()` callback. If this callback is installed, it will be called instead.
-The following example illustrates an external data provider. The example is possible to run from the examples collection. It resides under [examples.ncs/sdk-api/external-db](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/external-db).
+The following example illustrates an external data provider. You can run the example from the examples collection; it resides under [examples.ncs/sdk-api/external-db](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/external-db).
The example comes with a tailor-made database, `MyDb`. The source code is provided with the example but not shown here; however, the functionality will be obvious from method names like `newItem()`, `lock()`, `save()`, etc.
@@ -684,7 +684,7 @@ The action callbacks are:
* `init()` Similar to the transaction `init()` callback. However, note that, unlike the case with transaction and data callbacks, both `init()` and `action()` are registered for each `actionpoint` (i.e., different action points can have different `init()` callbacks), and there is no `finish()` callback; the action is completed when the `action()` callback returns.
* `action()` This callback is invoked to actually execute the `rpc` or `action`. It receives the input parameters (if any) and returns the output parameters (if any).
-In the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-java) example, we can define a `self-test` action. In the `packages/l3vpn/src/yang/l3vpn.yang`, we locate the service callback definition:
+In the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-java) example, we can define a `self-test` action. In the `packages/l3vpn/src/yang/l3vpn.yang`, we locate the service callback definition:
```
uses ncs:service-data;
@@ -761,38 +761,32 @@ The transaction validation callbacks are:
* `init()`: This callback is invoked when the validation phase starts. It will typically attach to the current transaction:
-{% code title="Example: Attach Maapi to the Current Transaction" %}
-````
-```
- public class SimpleValidator implements DpTransValidateCallback{
- ...
- @TransValidateCallback(callType=TransValidateCBType.INIT)
- public void init(DpTrans trans) throws DpCallbackException{
- try {
- th = trans.thandle;
- maapi.attach(th, new MyNamesapce().hash(), trans.uinfo.usid);
- ..
- } catch(Exception e) {
- throw new DpCallbackException("failed to attach via maapi: "+
- e.getMessage());
- }
- }
+{% code title="Example: Attach Maapi to the Current Transaction" overflow="wrap" %}
+```java
+public class SimpleValidator implements DpTransValidateCallback{
+ ...
+ @TransValidateCallback(callType=TransValidateCBType.INIT)
+ public void init(DpTrans trans) throws DpCallbackException{
+ try {
+ th = trans.thandle;
+ maapi.attach(th, new MyNamespace().hash(), trans.uinfo.usid);
+ ..
+ }
+ catch(Exception e) {
+ throw new DpCallbackException("failed to attach via maapi: "+ e.getMessage());
+ }
+ }
}
```
-````
{% endcode %}
-```
-\
-```
-
* `stop()`: This callback is invoked when the validation phase ends. If `init()` attached to the transaction, `stop()` should detach from it.
The actual validation logic is implemented in a validation callback:
* `validate()`: This callback is invoked for a specific validation point.
-### Transforms
+#### Transforms
Transforms implement a mapping between one part of the data model - the front-end of the transform - and another part - the back-end of the transform. Typically the front-end is visible to northbound interfaces, while the back-end is not, but for operational data (`config false` in the data model), a transform may implement a different view (e.g. aggregation) of data that is also visible without going through the transform.
@@ -800,7 +794,7 @@ The implementation of a transform uses techniques already described in this sect
To specify that the front-end data is provided by a transform, the data model uses the `tailf:callpoint` statement with a `tailf:transform true` substatement. Since transforms do not participate in the two-phase commit protocol, they only need to register the `init()` and `finish()` transaction callbacks. The `init()` callback attaches to the transaction and `finish()` detaches from it. Also, a transform for operational data only needs to register the data callbacks that read data, i.e. `getElem()`, `existsOptional()`, etc.
-### Hooks
+#### Hooks
Hooks make it possible to have changes to the configuration trigger additional changes. In general, this should only be done when the data written by the hook is not visible to northbound interfaces, since otherwise the additional changes make it difficult for e.g. EMS or NMS systems to manage the configuration: the complete configuration resulting from a given change cannot be predicted. However, one use case in NSO for hooks that trigger visible changes is precisely to model managed devices that have this behavior: hooks in the device model can emulate what the device does on certain configuration changes, and thus the device configuration in NSO remains in sync with the actual device configuration.
@@ -808,11 +802,11 @@ The implementation technique for a hook is very similar to that for a transform.
To specify that changes to some part of the configuration should trigger a hook invocation, the data model uses the `tailf:callpoint` statement with a `tailf:set-hook` or `tailf:transaction-hook` substatement. A set-hook is invoked immediately when a northbound agent requests a write operation on the data, while a transaction-hook is invoked when the transaction is committed. For the NSO-specific use case mentioned above, a `set-hook` should be used. The `tailf:set-hook` and `tailf:transaction-hook` statements take an argument specifying the extent of the data model the hook applies to.
-## NED API
+### NED API
-NSO can speak southbound to an arbitrary management interface. This is of course not entirely automatic like with NETCONF or SNMP, and depending on the type of interface the device has for configuration, this may involve some programming. Devices with a Cisco-style CLI can however be managed by writing YANG models describing the data in the CLI, and a relatively thin layer of Java code to handle the communication to the devices. Refer to [Network Element Drivers (NEDs)](../../advanced-development/developing-neds/) for more information.
+NSO can speak southbound to an arbitrary management interface. This is of course not entirely automatic like with NETCONF or SNMP, and depending on the type of interface the device has for configuration, this may involve some programming. Devices with a Cisco-style CLI can however be managed by writing YANG models describing the data in the CLI, and a relatively thin layer of Java code to handle the communication to the devices. Refer to Network Element Drivers (NEDs) for more information.
-## NAVU API
+### NAVU API
The NAVU API provides a DOM-driven approach to navigate the NSO service and device models. The main features of the NAVU API are dynamic schema loading at start-up and lazy loading of instance data. The navigation model is based on the YANG language structure. In addition to navigation and reading of values, NAVU also provides methods to modify the data model. Furthermore, it supports the execution of actions modeled in the service model.
@@ -822,7 +816,7 @@ NAVU requires all models i.e. the complete NSO service model with all its augmen
The `ncsc` tool can also generate Java classes from the .yang files. These files, extending the `ConfNamespace` base class, are the Java representation of the models and contain all defined nametags and their corresponding hash values. These Java classes can, optionally, be used as help classes in the service applications to make NAVU navigation type-safe, e.g. eliminating errors from misspelled model container names.
(figure: NAVU Design Support)
The service models are loaded at start-up and are always the latest version. The models are always traversed in a lazy fashion, i.e., data is only loaded when it is needed, to minimize the amount of data transferred between NSO and the service applications.
@@ -833,7 +827,7 @@ The most important classes of NAVU are the classes implementing the YANG node ty
* `NavuListEntry`: list node entry.
* `NavuLeaf`: the NavuLeaf represents a YANG leaf node.
(figure: NAVU YANG Structure)
The remaining part of this section guides us through the most useful features of NAVU. Should further information be required, please refer to the corresponding Javadoc pages.
@@ -853,7 +847,7 @@ module tailf-ncs {
{% endcode %}
{% code title="Example: NSO NavuContainer Instance" %}
-```
+```java
.....
NavuContext context = new NavuContext(maapi);
context.startRunningTrans(Conf.MODE_READ);
@@ -900,7 +894,7 @@ submodule tailf-ncs-devices {
If the purpose is to directly access a list node, we would typically navigate directly to the list element using the NAVU primitives.
{% code title="Example: NAVU List Direct Element Access" %}
-```
+```java
.....
NavuContext context = new NavuContext(maapi);
context.startRunningTrans(Conf.MODE_READ);
@@ -920,7 +914,7 @@ If the purpose is to directly access a list node, we would typically do a direct
Or, if we want to iterate over all elements of a list, we could do as follows.
{% code title="Example: NAVU List Element Iterating" %}
-```
+```java
.....
NavuContext context = new NavuContext(maapi);
context.startRunningTrans(Conf.MODE_READ);
@@ -943,7 +937,7 @@ The above example uses the `select()` which uses a recursive regexp match agains
Alternatively, if the purpose is to drill down deep into a structure, we should use `select()`. The `select()` method offers a wildcard-based search. The search is relative and can be performed from any node in the structure.
{% code title="Example: NAVU Leaf Access" %}
-```
+```java
.....
NavuContext context = new NavuContext(maapi);
context.startRunningTrans(Conf.MODE_READ);
@@ -965,7 +959,7 @@ All of the above are valid ways of traversing the lists depending on the purpose
An alternative method is to use `xPathSelect()`, where an XPath query is issued instead.
{% code title="Example: NAVU Leaf Access" %}
-```
+```java
.....
NavuContext context = new NavuContext(maapi);
context.startRunningTrans(Conf.MODE_READ);
@@ -1018,7 +1012,7 @@ module tailf-ncs {
To read and update a leaf, we simply navigate to the leaf and request the value. In the same manner, we can update the value.
{% code title="Example: NAVU List Element Iterating" %}
-```
+```java
.....
NavuContext context = new NavuContext(maapi);
context.startRunningTrans(Conf.MODE_READ);
@@ -1094,7 +1088,7 @@ module interfaces {
To execute the action below, we need to access a device with this module loaded. This is done in a similar way to non-action nodes.
{% code title="Example: NAVU Action Execution (1)" %}
-```
+```java
.....
NavuContext context = new NavuContext(maapi);
context.startRunningTrans(Conf.MODE_READ);
@@ -1140,7 +1134,7 @@ To execute the action below we need to access a device with this module loaded.
Or, we could do it with `xPathSelect()`.
{% code title="Example: NAVU Action Execution (2)" %}
-```
+```java
.....
NavuContext context = new NavuContext(maapi);
context.startRunningTrans(Conf.MODE_READ);
diff --git a/development/core-concepts/api-overview/python-api-overview.md b/development/core-concepts/api-overview/python-api-overview.md
index cb2cbe00..c3c11a0d 100644
--- a/development/core-concepts/api-overview/python-api-overview.md
+++ b/development/core-concepts/api-overview/python-api-overview.md
@@ -1147,7 +1147,7 @@ print("/operdata/value is now %s" % new_value)
The Python `_ncs.events` low-level module provides an API for subscribing to and processing NSO event notifications. Typically, the event notification API is used by applications that manage NSO through the SDK API (using, for example, MAAPI) or for debugging purposes. In addition to subscribing to the various events, streams available over other northbound interfaces, such as NETCONF, RESTCONF, etc., can be subscribed to as well.
-See [`examples.ncs/sdk-api/event-notifications`](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/event-notifications) for an example. The [`examples.ncs/common/event_notifications.py`](https://github.com/NSO-developer/nso-examples/tree/6.5/common/event_notifications.py) Python script used by the example can also be used as a standalone application to, for example, debug any NSO instance.
+See [`examples.ncs/sdk-api/event-notifications`](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/event-notifications) for an example. The [`examples.ncs/common/event_notifications.py`](https://github.com/NSO-developer/nso-examples/tree/6.6/common/event_notifications.py) Python script used by the example can also be used as a standalone application to, for example, debug any NSO instance.
## Advanced Topics
@@ -1228,3 +1228,75 @@ Functions and methods that accept the `load_schemas` argument:
* `ncs.maapi.Maapi()` constructor
* `ncs.maapi.single_read_trans()`
* `ncs.maapi.single_write_trans()`
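
For example, a short-lived client that does not need schema-aware (maagic) access can skip schema loading when opening a transaction. A minimal sketch, assuming the boolean form of the argument and illustrative user and context names:

```python
import ncs.maapi

# Skipping schema loading avoids the startup and memory cost of the schema
# in this client process; maagic navigation, however, requires schemas.
with ncs.maapi.single_read_trans('admin', 'system', load_schemas=False) as t:
    # Use t for operations that do not rely on schema-aware navigation.
    pass
```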
+
+### Using `multiprocessing.Process`
+
+When using multiprocessing in NSO, the default start method is now `spawn` instead of `fork`.
+With the `spawn` method, a new Python interpreter process is started, and all arguments passed to `multiprocessing.Process` must be picklable.
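+
+A minimal sketch of this approach (the helper names are illustrative, not part of the NSO API): extract plain, picklable values from the NSO objects before starting the process, and re-create any NSO connections inside the child.
+
+```python
+import multiprocessing
+
+def worker(username, usid, kp_str):
+    # Re-create any NSO sessions (e.g., maapi) here in the child process
+    # instead of passing live _ncs objects, which cannot be pickled.
+    ...
+
+def start_worker(uinfo, kp):
+    # Pass only plain values (strings, ints) so they can be pickled by spawn.
+    proc = multiprocessing.Process(target=worker,
+                                   args=(uinfo.username, uinfo.usid, str(kp)))
+    proc.start()
+    return proc
+```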
+
+If you pass Python objects that reference low-level C structures (for example `_ncs.dp.DaemonCtxRef` or `_ncs.UserInfo`), Python will raise an error like:
+
+```python
+TypeError: cannot pickle '