diff --git a/README.md b/README.md index df1afbe6..ed83e91e 100644 --- a/README.md +++ b/README.md @@ -2,7 +2,24 @@ description: Get started with the Cisco Crosswork NSO documentation guides. icon: power-off cover: images/gb-cover-final.png -coverY: 0 +coverY: -32.31167466986795 +layout: + width: default + cover: + visible: true + size: hero + title: + visible: true + description: + visible: true + tableOfContents: + visible: true + outline: + visible: true + pagination: + visible: true + metadata: + visible: true --- # Start diff --git a/administration/advanced-topics/layered-service-architecture.md b/administration/advanced-topics/layered-service-architecture.md index 580e663b..1d808d33 100644 --- a/administration/advanced-topics/layered-service-architecture.md +++ b/administration/advanced-topics/layered-service-architecture.md @@ -97,9 +97,9 @@ Finally, if the two-layer approach proves to be insufficient due to requirements ### Greenfield LSA Application -This section describes a small LSA application, which exists as a running example in the [examples.ncs/layered-services-architecture/lsa-single-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.5/layered-services-architecture/lsa-single-version-deployment) directory. +This section describes a small LSA application, which exists as a running example in the [examples.ncs/layered-services-architecture/lsa-single-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture/lsa-single-version-deployment) directory. -The application is a slight variation on the [examples.ncs/service-management/rfs-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/rfs-service) example where the YANG code has been split up into an upper-layer and a lower-layer implementation. The example topology (based on netsim for the managed devices, and NSO for the upper/lower layer NSO instances) looks like the following: +The application is a slight variation on the [examples.ncs/service-management/rfs-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/rfs-service) example where the YANG code has been split up into an upper-layer and a lower-layer implementation. The example topology (based on netsim for the managed devices, and NSO for the upper/lower layer NSO instances) looks like the following:

Example LSA architecture
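
The upper-layer CFS node onboards the lower-layer RFS nodes as ordinary managed devices. As a rough, hypothetical sketch (the RFS node names and CLI mode are assumptions, not taken from the example; see its README for the real setup), you could inspect them from the CFS node like this:

```bash
# Hypothetical sketch: the RFS node names below are placeholders.
# On the upper-layer CFS node, lower-layer RFS nodes are managed
# like any other device, so the usual device commands apply.
$ ncs_cli -u admin -C
admin@ncs# show devices list
admin@ncs# devices device lower-nso-1 sync-from
```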

@@ -425,7 +425,7 @@ To conclude this section, the final remark here is that to design a good LSA app ### Greenfield LSA Application Designed for Easy Scaling -In this section, we'll describe a lightly modified version of the example in the previous section. The application we describe here exists as a running example under [examples.ncs/layered-services-architecture/lsa-scaling](https://github.com/NSO-developer/nso-examples/tree/6.5/layered-services-architecture/lsa-scaling). +In this section, we'll describe a lightly modified version of the example in the previous section. The application we describe here exists as a running example under [examples.ncs/layered-services-architecture/lsa-scaling](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture/lsa-scaling). Sometimes it is desirable to be able to easily move devices from one lower LSA node to another. This makes it possible to easily expand or shrink the number of lower LSA nodes. Additionally, it is sometimes desirable to avoid HA pairs for replication but instead use a common store for all lower LSA devices, such as a distributed database, or a common file system. @@ -531,7 +531,7 @@ If we do not have the luxury of designing our NSO service application from scrat Usually, the reasons for re-architecting an existing application are performance-related. -In the NSO example collection, two popular examples are the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-java) and [examples.ncs/service-management/mpls-vpn-python](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-python) examples. Those example contains an almost "real" VPN provisioning example whereby VPNs are provisioned in a network of CPEs, PEs, and P routers according to this picture: +In the NSO example collection, two popular examples are the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-java) and [examples.ncs/service-management/mpls-vpn-python](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-python) examples. Those examples contain an almost "real" VPN provisioning example whereby VPNs are provisioned in a network of CPEs, PEs, and P routers according to this picture:

VPN network

@@ -592,7 +592,7 @@ By far the easiest way to change an existing monolithic NSO application into the In this example, the topology information is stored in a separate container `share-data` and propagated to the LSA nodes by means of service code. -The example [examples.ncs/layered-services-architecture/mpls-vpn-lsa](https://github.com/NSO-developer/nso-examples/tree/6.5/layered-services-architecture/mpls-vpn-lsa) example does exactly this, the upper layer data model in `upper-nso/packages/l3vpn/src/yang/l3vpn.yang` now looks as: +The [examples.ncs/layered-services-architecture/mpls-vpn-lsa](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture/mpls-vpn-lsa) example does exactly this; the upper-layer data model in `upper-nso/packages/l3vpn/src/yang/l3vpn.yang` now looks as follows: ```yang list l3vpn { @@ -765,7 +765,7 @@ Deployment of an LSA cluster where all the nodes have the same major version of The choice between the two deployment options depends on your functional needs. The single version is easier to maintain and is a good starting point but is less flexible. While it is possible to migrate from one to the other, the migration from a single version to a multi-version is typically easier than the other way around. Still, every migration requires some effort, so it is best to pick one approach and stick to it. -You can find working examples of both deployment types in the [examples.ncs/layered-services-architecture/lsa-single-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.5/layered-services-architecture/lsa-single-version-deployment) and [examples.ncs/layered-services-architecture/lsa-multi-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.5/layered-services-architecture/lsa-multi-version-deployment) folders, respectively. +You can find working examples of both deployment types in the [examples.ncs/layered-services-architecture/lsa-single-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture/lsa-single-version-deployment) and [examples.ncs/layered-services-architecture/lsa-multi-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture/lsa-multi-version-deployment) folders, respectively. ### RFS Nodes Setup @@ -912,7 +912,7 @@ Once you have both, the CFS and device-compiled RFS service packages are ready; ### Example Walkthrough -You can see all the required setup steps for a single version deployment performed in the example [examples.ncs/layered-services-architecture/lsa-single-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.5/layered-services-architecture/lsa-single-version-deployment) and the [examples.ncs/layered-services-architecture/lsa-multi-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.5/layered-services-architecture/lsa-multi-version-deployment) has the steps for the multi-version one. The two are quite similar but the multi-version deployment has additional steps, so it is the one described here. 
+You can see all the required setup steps for a single version deployment performed in the example [examples.ncs/layered-services-architecture/lsa-single-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture/lsa-single-version-deployment) and the [examples.ncs/layered-services-architecture/lsa-multi-version-deployment](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture/lsa-multi-version-deployment) has the steps for the multi-version one. The two are quite similar but the multi-version deployment has additional steps, so it is the one described here. First, build the example for manual setup. @@ -1172,7 +1172,7 @@ Likewise, you can return to the Single-Version Deployment, by upgrading the RFS All these `ned-id` changes stem from the fact that the upper-layer CFS node treats the lower-layer RFS node as a managed device, requiring the correct model, just like it does for any other device type. For the same reason, maintenance (bug fix or patch) NSO upgrades do not result in a changed `ned-id`, so for those, no migration is necessary. -The [NSO example set](https://github.com/NSO-developer/nso-examples/tree/6.5/layered-services-architecture) illustrates different aspects of LSA deployment including working with single- and multi-version deployments. +The [NSO example set](https://github.com/NSO-developer/nso-examples/tree/6.6/layered-services-architecture) illustrates different aspects of LSA deployment including working with single- and multi-version deployments. ### User Authorization Passthrough diff --git a/administration/installation-and-deployment/containerized-nso.md b/administration/installation-and-deployment/containerized-nso.md index f77ac43a..da4045f6 100644 --- a/administration/installation-and-deployment/containerized-nso.md +++ b/administration/installation-and-deployment/containerized-nso.md @@ -48,7 +48,7 @@ Consult the [Installation](./) documentation for information on installing NSO o {% hint style="info" %} See [Developing and Deploying a Nano Service](deployment/develop-and-deploy-a-nano-service.md) for an example that uses the container to deploy an SSH-key-provisioning nano service. -The README in the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.5/getting-started/netsim-sshkey) example provides a link to the container-based deployment variant of the example. See the `setup_ncip.sh` script and `README` in the `netsim-sshkey` deployment example for details. +The README in the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey) example provides a link to the container-based deployment variant of the example. See the `setup_ncip.sh` script and `README` in the `netsim-sshkey` deployment example for details. {% endhint %} ### Build Image @@ -195,7 +195,7 @@ If you need to perform operations before or after the `ncs` process is started i NSO is installed with the `--run-as-user` option for build and production containers to run NSO from the non-root `nso` user that belongs to the `nso` user group. -When migrating from container versions where NSO has `root` privilege, ensure the `nso` user owns or has access rights to the required files and directories. Examples include application directories, SSH host keys, SSH keys used to authenticate with devices, etc. 
See the deployment example variant referenced by the [examples.ncs/getting-started/netsim-sshkey/README.md](https://github.com/NSO-developer/nso-examples/tree/6.5/getting-started/netsim-sshkey) for an example. +When migrating from container versions where NSO has `root` privilege, ensure the `nso` user owns or has access rights to the required files and directories. Examples include application directories, SSH host keys, SSH keys used to authenticate with devices, etc. See the deployment example variant referenced by the [examples.ncs/getting-started/netsim-sshkey/README.md](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey) for an example. The NSO container runs a script called `take-ownership.sh` as part of its startup, which takes ownership of all the directories that NSO needs. The script will be one of the first things to run. The script can be overridden to take ownership of even more directories, such as mounted volumes or bind mounts. @@ -625,7 +625,7 @@ This example covers the necessary information to manifest the use of NSO images #### **Packages** -The packages used in this example are taken from the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.5/getting-started/netsim-sshkey) example: +The packages used in this example are taken from the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey) example: * `distkey`: A simple Python + template service package that automates the setup of SSH public key authentication between netsim (ConfD) devices and NSO using a nano service. * `ne`: A NETCONF NED package representing a netsim network element that implements a configuration subscriber Python application that adds or removes the configured public key, which the netsim (ConfD) network element checks when authenticating public key authentication clients. diff --git a/administration/installation-and-deployment/deployment/deployment-example.md b/administration/installation-and-deployment/deployment/deployment-example.md index 7d927aa5..a39caf89 100644 --- a/administration/installation-and-deployment/deployment/deployment-example.md +++ b/administration/installation-and-deployment/deployment/deployment-example.md @@ -4,7 +4,7 @@ description: Understand NSO deployment with an example setup. # Deployment Example -This section shows examples of a typical deployment for a highly available (HA) setup. A reference to an example implementation of the `tailf-hcc` layer-2 upgrade deployment scenario described here, check the NSO example set under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc). The example covers the following topics: +This section shows examples of a typical deployment for a highly available (HA) setup. For a reference to an example implementation of the `tailf-hcc` layer-2 upgrade deployment scenario described here, check the NSO example set under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc). The example covers the following topics: * Installation of NSO on all nodes in an HA setup * Initial configuration of NSO on all nodes @@ -175,9 +175,9 @@ The NSO HA, together with the `tailf-hcc` package, provides three features: * If the leader/primary fails, a follower/secondary takes over and starts to act as leader/primary. 
This is how HA Raft works and how the rule-based HA variant of this example is configured to handle failover automatically. * At failover, `tailf-hcc` sets up a virtual alias IP address on the leader/primary node only and uses gratuitous ARP packets to update all nodes in the network with the new mapping to the leader/primary node. -Nodes in other networks can be updated using the `tailf-hcc` layer-3 BGP functionality or a load balancer. See the `load-balancer`and `hcc`examples in the NSO example set under [examples.ncs/high-availability](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability). +Nodes in other networks can be updated using the `tailf-hcc` layer-3 BGP functionality or a load balancer. See the `load-balancer` and `hcc` examples in the NSO example set under [examples.ncs/high-availability](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability). -See the NSO example set under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc) for a reference to an HA Raft and rule-based HA `tailf-hcc` Layer 3 BGP examples. +See the NSO example set under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) for references to the HA Raft and rule-based HA `tailf-hcc` Layer 3 BGP examples. The HA Raft and rule-based HA upgrade-l2 examples also demonstrate HA failover, upgrading the NSO version on all nodes, and upgrading NSO packages on all nodes. @@ -211,7 +211,7 @@ The NSO system installations performed on the nodes in the HA cluster also insta ### Syslog -For the HA Raft and rule-based HA upgrade-l2 examples, see the reference from the `README` in the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc) example directory; the examples integrate with `rsyslog` to log the `ncs`, `developer`, `upgrade`, `audit`, `netconf`, `snmp`, and `webui-access` logs to syslog with `facility` set to `daemon` in `ncs.conf`. +For the HA Raft and rule-based HA upgrade-l2 examples, see the reference from the `README` in the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) example directory; the examples integrate with `rsyslog` to log the `ncs`, `developer`, `upgrade`, `audit`, `netconf`, `snmp`, and `webui-access` logs to syslog with `facility` set to `daemon` in `ncs.conf`. `rsyslogd` on the nodes in the HA cluster is configured to write the daemon facility logs to `/var/log/daemon.log`, and forward the daemon facility logs with the severity `info` or higher to the manager node's `/var/log/ha-cluster.log` syslog. @@ -345,4 +345,4 @@ $ cat /etc/ncs/ipc_access ....... ``` -For an HA setup, HA Raft is based on the Raft consensus algorithm and provides the best fault tolerance, performance, and security. It is therefore recommended over the legacy rule-based HA variant. The `raft-upgrade-l2` project, referenced from the NSO example set under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.4/high-availability/hcc), together with this Deployment Example section, describes a reference implementation. See [NSO HA Raft](../../management/high-availability.md#ug.ha.raft) for more HA Raft details. +For an HA setup, HA Raft is based on the Raft consensus algorithm and provides the best fault tolerance, performance, and security. It is therefore recommended over the legacy rule-based HA variant. 
The `raft-upgrade-l2` project, referenced from the NSO example set under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc), together with this Deployment Example section, describes a reference implementation. See [NSO HA Raft](../../management/high-availability.md#ug.ha.raft) for more HA Raft details. diff --git a/administration/installation-and-deployment/deployment/develop-and-deploy-a-nano-service.md b/administration/installation-and-deployment/deployment/develop-and-deploy-a-nano-service.md index 27b9ae3d..8157518a 100644 --- a/administration/installation-and-deployment/deployment/develop-and-deploy-a-nano-service.md +++ b/administration/installation-and-deployment/deployment/develop-and-deploy-a-nano-service.md @@ -4,7 +4,7 @@ description: Develop and deploy a nano service using a guided example. # Develop and Deploy a Nano Service -This section shows how to develop and deploy a simple NSO nano service for managing the provisioning of SSH public keys for authentication. For more details on nano services, see [Nano Services for Staged Provisioning](../../../development/core-concepts/nano-services.md) in Development. The example showcasing development is available under [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.5/getting-started/netsim-sshkey). In addition, there is a reference from the `README` in the example's directory to the deployment version of the example. +This section shows how to develop and deploy a simple NSO nano service for managing the provisioning of SSH public keys for authentication. For more details on nano services, see [Nano Services for Staged Provisioning](../../../development/core-concepts/nano-services.md) in Development. The example showcasing development is available under [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey). In addition, there is a reference from the `README` in the example's directory to the deployment version of the example. ## Development @@ -424,4 +424,4 @@ Two scripts showcase the nano service: As with the development version, both scripts will demo the service by generating keys, distributing the public key, and configuring NSO for public key authentication with the network elements. -To run the example and for more details, see the instructions in the `README` file of the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.5/getting-started/netsim-sshkey) deployment example. +To run the example and for more details, see the instructions in the `README` file of the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey) deployment example. diff --git a/administration/installation-and-deployment/deployment/secure-deployment.md b/administration/installation-and-deployment/deployment/secure-deployment.md index 814410d6..04ba84fe 100644 --- a/administration/installation-and-deployment/deployment/secure-deployment.md +++ b/administration/installation-and-deployment/deployment/secure-deployment.md @@ -63,7 +63,7 @@ Running NSO with minimal privileges is a fundamental security best practice: 1. `# chown root cmdwrapper` 2. 
`# chmod u+s cmdwrapper` -* The deployment variant referenced in the README file of the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.5/getting-started/netsim-sshkey) example provides a native and NSO production container based example. +* The deployment variant referenced in the README file of the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey) example provides a native and NSO production container based example. ## Authentication, Authorization, and Accounting (AAA) @@ -131,7 +131,7 @@ See [Authenticating IPC Access](../../management/aaa-infrastructure.md#authentic Secure communication with managed devices: * Use [Cisco-provided NEDs](../../management/ned-administration.md) when possible. -* Refer to the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.5/getting-started/netsim-sshkey) README, which references a deployment variant of the example for SSH key update patterns using nano services. +* Refer to the [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey) README, which references a deployment variant of the example for SSH key update patterns using nano services. ## Cryptographic Key Management diff --git a/administration/installation-and-deployment/post-install-actions/explore-the-installation.md b/administration/installation-and-deployment/post-install-actions/explore-the-installation.md index 2125d214..11adb134 100644 --- a/administration/installation-and-deployment/post-install-actions/explore-the-installation.md +++ b/administration/installation-and-deployment/post-install-actions/explore-the-installation.md @@ -41,7 +41,7 @@ Run `index.html` in your browser to explore further. ### Examples -Local Install comes with a rich set of [examples](https://github.com/NSO-developer/nso-examples/tree/6.5) to start using NSO. +Local Install comes with a rich set of [examples](https://github.com/NSO-developer/nso-examples/tree/6.6) to start using NSO. ```bash $ ls -1 examples.ncs/ @@ -81,7 +81,7 @@ juniper-junos-nc-3.0 ``` {% hint style="info" %} -The example NEDs included in the installer are intended for evaluation, demonstration, and use with the [examples.ncs](https://github.com/NSO-developer/nso-examples/tree/6.5) examples. These are not the latest versions available and often do not have all the features available in production NEDs. +The example NEDs included in the installer are intended for evaluation, demonstration, and use with the [examples.ncs](https://github.com/NSO-developer/nso-examples/tree/6.6) examples. These are not the latest versions available and often do not have all the features available in production NEDs. 
{% endhint %} #### **Install New NEDs** diff --git a/administration/installation-and-deployment/post-install-actions/modify-examples-for-system-install.md b/administration/installation-and-deployment/post-install-actions/modify-examples-for-system-install.md index 58bf7acd..46efe5da 100644 --- a/administration/installation-and-deployment/post-install-actions/modify-examples-for-system-install.md +++ b/administration/installation-and-deployment/post-install-actions/modify-examples-for-system-install.md @@ -12,7 +12,7 @@ Since all the NSO examples and README steps that come with the installer are pri To work with the System Install structure, this may require a little or bigger modification depending on the example. -For example, to port the [example.ncs/nano-services/basic-vrouter](https://github.com/NSO-developer/nso-examples/tree/6.5/nano-services/basic-vrouter) example to the System Install structure: +For example, to port the [example.ncs/nano-services/basic-vrouter](https://github.com/NSO-developer/nso-examples/tree/6.6/nano-services/basic-vrouter) example to the System Install structure: 1. Make the following changes to the `basic-vrouter/ncs.conf` file: diff --git a/administration/installation-and-deployment/post-install-actions/running-nso-examples.md b/administration/installation-and-deployment/post-install-actions/running-nso-examples.md index 1e61d070..ee16338c 100644 --- a/administration/installation-and-deployment/post-install-actions/running-nso-examples.md +++ b/administration/installation-and-deployment/post-install-actions/running-nso-examples.md @@ -11,7 +11,7 @@ Applies to Local Install. This section provides an overview of how to run the examples provided with the NSO installer. By working through the examples, the reader should get a good overview of the various aspects of NSO and hands-on experience from interacting with it. {% hint style="info" %} -This section references the examples located in [$NCS\_DIR/examples.ncs](https://github.com/NSO-developer/nso-examples/tree/6.5). The examples all have `README` files that include instructions related to the example. +This section references the examples located in [$NCS\_DIR/examples.ncs](https://github.com/NSO-developer/nso-examples/tree/6.6). The examples all have `README` files that include instructions related to the example. {% endhint %} ## General Instructions diff --git a/administration/installation-and-deployment/upgrade-nso.md b/administration/installation-and-deployment/upgrade-nso.md index 90c47f36..13b05ed4 100644 --- a/administration/installation-and-deployment/upgrade-nso.md +++ b/administration/installation-and-deployment/upgrade-nso.md @@ -32,7 +32,7 @@ In case it turns out that any of the packages are incompatible or cannot be reco Additional preparation steps may be required based on the upgrade and the actual setup, such as when using the Layered Service Architecture (LSA) feature. In particular, for a major NSO upgrade in a multi-version LSA cluster, ensure that the new version supports the other cluster members and follow the additional steps outlined in [Deploying LSA](../advanced-topics/layered-service-architecture.md#deploying-lsa) in Layered Service Architecture. -If you use the High Availability (HA) feature, the upgrade consists of multiple steps on different nodes. To avoid mistakes, you are encouraged to script the process, for which you will need to set up and verify access to all NSO instances with either `ssh`, `nct`, or some other remote management command. 
For the reference example, we use in this chapter, see [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc). The management station uses shell and Python scripts that use `ssh` to access the Linux shell and NSO CLI and Python Requests for NSO RESTCONF interface access. +If you use the High Availability (HA) feature, the upgrade consists of multiple steps on different nodes. To avoid mistakes, you are encouraged to script the process, for which you will need to set up and verify access to all NSO instances with either `ssh`, `nct`, or some other remote management command. For the reference example, we use in this chapter, see [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc). The management station uses shell and Python scripts that use `ssh` to access the Linux shell and NSO CLI and Python Requests for NSO RESTCONF interface access. Likewise, NSO 5.3 added support for 256-bit AES encrypted strings, requiring the AES256CFB128 key in the `ncs.conf` configuration. You can generate one with the `openssl rand -hex 32` or a similar command. Alternatively, if you use an external command to provide keys, ensure that it includes a value for an `AES256CFB128_KEY` in the output. @@ -418,9 +418,9 @@ To further reduce time spent upgrading, you can customize the script to install You can use the same script for a maintenance upgrade as-is, with an empty `packages-MAJORVERSION` directory, or remove the `upgrade_packages` calls from the script. -Example implementations that use scripts to upgrade a 2- and 3-node setup using CLI/MAAPI or RESTCONF are available in the NSO example set under [examples.ncs/high-availability](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability). +Example implementations that use scripts to upgrade a 2- and 3-node setup using CLI/MAAPI or RESTCONF are available in the NSO example set under [examples.ncs/high-availability](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability). -We have been using a two-node HCC layer-2 upgrade reference example elsewhere in the documentation to demonstrate installing NSO and adding the initial configuration. The upgrade-l2 example referenced in [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc) implements shell and Python scripted steps to upgrade the NSO version using `ssh` to the Linux shell and the NSO CLI or Python Requests RESTCONF for accessing the `paris` and `london` nodes. See the example for details. +We have been using a two-node HCC layer-2 upgrade reference example elsewhere in the documentation to demonstrate installing NSO and adding the initial configuration. The upgrade-l2 example referenced in [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) implements shell and Python scripted steps to upgrade the NSO version using `ssh` to the Linux shell and the NSO CLI or Python Requests RESTCONF for accessing the `paris` and `london` nodes. See the example for details. If you do not wish to automate the upgrade process, you will need to follow the instructions from [Single Instance Upgrade](upgrade-nso.md#ug.admin_guide.manual_upgrade) and transfer the required files to each host manually. Additional information on HA is available in [High Availability](../management/high-availability.md). 
However, you can run the `high-availability` actions from the preceding script on the NSO CLI as-is. In this case, please take special care of which host you perform each command, as it can be easy to mix them up. @@ -488,9 +488,9 @@ The `packages ha sync and-reload` command has the following known limitations an * The `primary` node is set to `read-only` mode before the upgrade starts, and it is set back to its previous mode if the upgrade is successfully upgraded. However, the node will always be in read-write mode if an error occurs during the upgrade. It is up to the user to set the node back to the desired mode by using the `high-availability read-only mode` command. * As a best practice, you should create a backup of all nodes before upgrading. This action creates no backups, you must do that explicitly. -Example implementations that use scripts to upgrade a 2- and 3-node setup using CLI/MAAPI or RESTCONF are available in the NSO example set under [examples.ncs/high-availability](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availabilit). +Example implementations that use scripts to upgrade a 2- and 3-node setup using CLI/MAAPI or RESTCONF are available in the NSO example set under [examples.ncs/high-availability](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability). -We have been using a two-node HCC layer 2 upgrade reference example elsewhere in the documentation to demonstrate installing NSO and adding the initial configuration. The `upgrade-l2` example referenced in [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc) implements shell and Python scripted steps to upgrade the `primary` `paris` package versions and sync the packages to the `secondary` `london` using `ssh` to the Linux shell and the NSO CLI or Python Requests RESTCONF for accessing the `paris` and `london` nodes. See the example for details. +We have been using a two-node HCC layer 2 upgrade reference example elsewhere in the documentation to demonstrate installing NSO and adding the initial configuration. The `upgrade-l2` example referenced in [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) implements shell and Python scripted steps to upgrade the `primary` `paris` package versions and sync the packages to the `secondary` `london` using `ssh` to the Linux shell and the NSO CLI or Python Requests RESTCONF for accessing the `paris` and `london` nodes. See the example for details. In some cases, NSO may warn when the upgrade looks suspicious. For more information on this, see [Loading Packages](../management/package-mgmt.md#ug.package_mgmt.loading). If you understand the implications and are willing to risk losing data, use the `force` option with `packages reload` or set the `NCS_RELOAD_PACKAGES` environment variable to `force` when restarting NSO. It will force NSO to ignore warnings and proceed with the upgrade. In general, this is not recommended. diff --git a/administration/management/aaa-infrastructure.md b/administration/management/aaa-infrastructure.md index 729482f0..fe2c53ba 100644 --- a/administration/management/aaa-infrastructure.md +++ b/administration/management/aaa-infrastructure.md @@ -609,7 +609,7 @@ NSO will skip this access check in case the euid of the connecting process is 0 If using Unix socket IPC, clients and client libraries must now specify the path that identifies the socket. 
The path must match the one set under `ncs-local-ipc/path` in `ncs.conf`. Clients may expose a client-specific way to set it, such as the `-S` option of the `ncs_cli` command. Alternatively, you can use the `NCS_IPC_PATH` environment variable to specify the socket path independently of the used client. -See [examples.ncs/aaa/ipc](https://github.com/NSO-developer/nso-examples/tree/6.5/aaa/ipc) for a working example. +See [examples.ncs/aaa/ipc](https://github.com/NSO-developer/nso-examples/tree/6.6/aaa/ipc) for a working example. ## Group Membership diff --git a/administration/management/high-availability.md b/administration/management/high-availability.md index c77d106f..ae2a3537 100644 --- a/administration/management/high-availability.md +++ b/administration/management/high-availability.md @@ -34,9 +34,9 @@ Compared to traditional fail-over HA solutions, Raft relies on the consensus of Raft achieves robustness by requiring at least three nodes in the HA cluster. Three is the recommended cluster size, allowing the cluster to operate in the face of a single node failure. In case you need to tolerate two nodes failing simultaneously, you can add two additional nodes, for a 5-node cluster. However, permanently having more than five nodes in a single cluster is currently not recommended since Raft requires the majority of the currently configured nodes in the cluster to reach consensus. Without the consensus, the cluster cannot function. -You can start a sample HA Raft cluster using the [examples.ncs/high-availability/raft-cluster](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/raft-cluster) example to test it out. The scripts in the example show various aspects of cluster setup and operation, which are further described in the rest of this section. +You can start a sample HA Raft cluster using the [examples.ncs/high-availability/raft-cluster](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/raft-cluster) example to test it out. The scripts in the example show various aspects of cluster setup and operation, which are further described in the rest of this section. -Optionally, examples using separate containers for each HA Raft cluster member with NSO system installations are available and referenced in the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/blob/6.4/high-availability/hcc) example in the NSO example set. +Optionally, examples using separate containers for each HA Raft cluster member with NSO system installations are available and referenced in the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) example in the NSO example set. ### Overview of Raft Operation @@ -72,9 +72,9 @@ In most cases, this means the `ADDRESS` must appear in the node certificate's Su Create and use a self-signed CA to secure the NSO HA Raft cluster. A self-signed CA is the only secure option. The CA should only be used to sign the certificates of the member nodes in one NSO HA Raft cluster. It is critical for security that the CA is not used to sign any other certificates. Any certificate signed by the CA can be used to gain complete control of the NSO HA Raft cluster. -See the [examples.ncs/high-availability/raft-cluster](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/raft-cluster) example for one way to set up a self-signed CA and provision individual node certificates. 
The example uses a shell script `gen_tls_certs.sh` that invokes the `openssl` command. Consult the section [Recipe for a Self-signed CA](high-availability.md#recipe-for-a-self-signed-ca) for using it independently of the example. +See the [examples.ncs/high-availability/raft-cluster](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/raft-cluster) example for one way to set up a self-signed CA and provision individual node certificates. The example uses a shell script `gen_tls_certs.sh` that invokes the `openssl` command. Consult the section [Recipe for a Self-signed CA](high-availability.md#recipe-for-a-self-signed-ca) for using it independently of the example. -Examples using separate containers for each HA Raft cluster member with NSO system installations that use a variant of the `gen_tls_certs.sh` script are available and referenced in the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc) example in the NSO example set. +Examples using separate containers for each HA Raft cluster member with NSO system installations that use a variant of the `gen_tls_certs.sh` script are available and referenced in the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) example in the NSO example set. {% hint style="info" %} When using an IP address instead of a DNS name for node's `ADDRESS`, you must add the IP address to the certificate's dNSName SAN field (adding it to iPAddress field only is insufficient). This is a known limitation in the current version. @@ -110,7 +110,7 @@ The recipe makes the following assumptions: To use this recipe: -* First prepare a working environment on a secure host by creating a new directory and copying the `gen_tls_certs.sh` script from [examples.ncs/high-availability/raft-cluster](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/raft-cluster) into it. Additionally, ensure that the `openssl` command, version 1.1 or later, is available and the system time is set correctly. Supposing that you have a cluster named `lower-west`, you might run: +* First prepare a working environment on a secure host by creating a new directory and copying the `gen_tls_certs.sh` script from [examples.ncs/high-availability/raft-cluster](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/raft-cluster) into it. Additionally, ensure that the `openssl` command, version 1.1 or later, is available and the system time is set correctly. Supposing that you have a cluster named `lower-west`, you might run: ```bash $ mkdir raft-ca-lower-west @@ -418,7 +418,7 @@ For the full procedure, first, ensure all cluster nodes are up and operational, Note that while the upgrade is in progress, writes to the CDB are not allowed and will be rejected. -For a `packages ha sync and-reload` example see the `raft-upgrade-l2` NSO system installation-based example referenced by the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc) example in the NSO example set. +For a `packages ha sync and-reload` example see the `raft-upgrade-l2` NSO system installation-based example referenced by the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) example in the NSO example set. 
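As a minimal sketch of what invoking that action can look like (assuming a C-style CLI session on the current HA Raft leader; the referenced example drives this from scripts, and output will differ per setup):

```bash
# Minimal sketch: sync packages to the followers and reload them
# cluster-wide, run from the HA Raft leader node.
$ ncs_cli -u admin -C
admin@ncs# packages ha sync and-reload
```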
For more details, troubleshooting, and general upgrade recommendations, see [NSO Packages](package-mgmt.md) and [Upgrade](../installation-and-deployment/upgrade-nso.md). @@ -446,7 +446,7 @@ The procedure differentiates between the current leader node versus followers. T For a standard System Install, the single-node procedure is described in [Single Instance Upgrade](../installation-and-deployment/upgrade-nso.md#ug.admin_guide.manual_upgrade), but in general depends on the NSO deployment type. For example, it will be different for containerized environments. For specifics, please refer to the documentation for the deployment type. -For an example see the `raft-upgrade-l2` NSO system installation-based example referenced by the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc) example in the NSO example set. +For an example see the `raft-upgrade-l2` NSO system installation-based example referenced by the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) example in the NSO example set. If the upgrade fails before or during the upgrade of the original leader, start up the original followers to restore service and then restore the original leader, using backup as necessary. @@ -507,7 +507,7 @@ In an NSO System Install setup, not only does the shared token need to match bet The token configured on the secondary node is overwritten with the encrypted token of type `aes-256-cfb-128-encrypted-string` from the primary node when the secondary node connects to the primary. If there is a mismatch between the encrypted-string configuration on the nodes, NSO will not decrypt the HA token to match the token presented. As a result, the primary node denies the secondary node access the next time the HA connection needs to reestablish with a "Token mismatch, secondary is not allowed" error. -See the `upgrade-l2` example, referenced from [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc), for an example setup and the [Deployment Example](../installation-and-deployment/deployment/deployment-example.md) for a description of the example. +See the `upgrade-l2` example, referenced from [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc), for an example setup and the [Deployment Example](../installation-and-deployment/deployment/deployment-example.md) for a description of the example. Also, note that the `ncs.crypto_keys` file is highly sensitive. The file contains the encryption keys for all CDB data that is encrypted on disk. Besides the HA token, this often includes passwords for various entities, such as login credentials to managed devices. @@ -684,7 +684,7 @@ HCC 5.x or later automatically associates VIP addresses with Linux network inter Since version 5.0, HCC relies on the NSO built-in HA for cluster management and only performs address or route management in reaction to cluster changes. Therefore, no special measures are necessary if using HCC when performing an NSO version upgrade or a package upgrade. Instead, you should follow the standard best practice HA upgrade procedure from [NSO HA Version Upgrade](../installation-and-deployment/upgrade-nso.md#ch_upgrade.ha). -A reference to upgrade examples can be found in the README under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc). 
+A reference to upgrade examples can be found in the README under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc). ### Layer-2 @@ -854,7 +854,7 @@ This section describes basic deployment scenarios for HCC. Layer-2 mode is demon * [Enabling Layer-3 BGP](high-availability.md#enabling-layer-3-bgp) * [Enabling Layer-3 DNS](high-availability.md#enabling-layer-3-dns) -A reference to container-based examples for the layer-2 and layer-3 deployment scenarios described here can be found in the NSO example set under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc). +A reference to container-based examples for the layer-2 and layer-3 deployment scenarios described here can be found in the NSO example set under [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc). Both scenarios consist of two test nodes: `london` and `paris` with a single IPv4 VIP address. For the layer-2 scenario, the nodes are on the same network. The layer-3 scenario also involves a BGP-enabled `router` node as the `london` and `paris` nodes are on two different networks. @@ -916,7 +916,7 @@ root@london:~# ip address list Layer-2 Example Implementation: -A reference to a container-based example of the layer-2 scenario can be found in the NSO example set. See the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc) `README`. +A reference to a container-based example of the layer-2 scenario can be found in the NSO example set. See the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) `README`. #### **Enabling Layer-3 BGP** @@ -986,7 +986,7 @@ The VIP subnet is routed to the `paris` host, which is the primary node. Layer-3 BGP Example Implementation: -A reference to a container-based example of the combined layer-2 and layer-3 BGP scenario can be found in the NSO example set. See the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc) `README`. +A reference to a container-based example of the combined layer-2 and layer-3 BGP scenario can be found in the NSO example set. See the [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc) `README`. #### **Enabling Layer-3 DNS** @@ -1043,7 +1043,7 @@ As an alternative to the HCC package, NSO built-in HA, either rule-based or HA R

Load Balancer Routes Connections to the Appropriate NSO Node

-The load balancer uses HTTP health checks to determine which node is currently the active primary. The example, found in the [examples.ncs/high-availability/load-balancer](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/load-balancer) directory uses HTTP status codes on the health check endpoint to easily distinguish whether the node is currently primary or not. +The load balancer uses HTTP health checks to determine which node is currently the active primary. The example, found in the [examples.ncs/high-availability/load-balancer](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/load-balancer) directory uses HTTP status codes on the health check endpoint to easily distinguish whether the node is currently primary or not. In the example, freely available HAProxy software is used as a load balancer to demonstrate the functionality. It is configured to steer connections on localhost to either of the TCP port 2024 (SSH CLI) and TCP port 8080 (web UI and RESTCONF) to the active node in a 2-node HA cluster. The HAProxy software is required if you wish to run this example yourself. diff --git a/administration/management/ned-administration.md b/administration/management/ned-administration.md index a9f558cf..a7eeeaba 100644 --- a/administration/management/ned-administration.md +++ b/administration/management/ned-administration.md @@ -416,7 +416,7 @@ If applying the steps for this example on a production system, you should first ### Prepare the Example -This guide uses the MPLS VPN example in Python from the NSO example set under [examples.ncs/service-management/mpls-vpn-python](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-python) to demonstrate porting an existing application to use the `juniper-junos_nc` NED. The simulated Junos device is replaced with a Junos vMX 21.1R1.11 container, but other NETCONF/YANG-compliant Junos versions also work. +This guide uses the MPLS VPN example in Python from the NSO example set under [examples.ncs/service-management/mpls-vpn-python](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-python) to demonstrate porting an existing application to use the `juniper-junos_nc` NED. The simulated Junos device is replaced with a Junos vMX 21.1R1.11 container, but other NETCONF/YANG-compliant Junos versions also work. ### **Add the `juniper-junos` and `juniper-junos_nc` NED Packages** @@ -958,6 +958,6 @@ However, there is a major downside to this approach. While the exact revision is If you still wish to use this functionality, you can create a NED package with the `ncs-make-package --netconf-ned` command as you would otherwise. However, the supplied source YANG directory should contain YANG modules with different revisions. The files should follow the _`module-or-submodule-name`_`@`_`revision-date`_`.yang` naming convention, as specified in the RFC6020. Some versions of the compiler require you to use the `--no-fail-on-warnings` option with the `ncs-make-package` command or the build process may fail. -The [examples.ncs/device-management/ned-yang-revision](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/ned-yang-revision) example shows how you can perform a YANG model upgrade. The original, 1.0 version of the router NED uses the `router@2020-02-27.yang` YANG model. First, it is updated to the version 1.0.1 `router@2020-09-18.yang` using a revision merge approach. This is possible because the changes are backward-compatible. 
+The [examples.ncs/device-management/ned-yang-revision](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/ned-yang-revision) example shows how you can perform a YANG model upgrade. The original, 1.0 version of the router NED uses the `router@2020-02-27.yang` YANG model. First, it is updated to the version 1.0.1 `router@2020-09-18.yang` using a revision merge approach. This is possible because the changes are backward-compatible. In the second part of the example, the updates in `router@2022-01-25.yang` introduce breaking changes, therefore the version is increased to 1.1 and a different NED-ID is assigned to the NED. In this case, you can't use revision merge and the usual NED migration procedure is required. diff --git a/administration/management/package-mgmt.md b/administration/management/package-mgmt.md index 4510b55a..cf4fbbff 100644 --- a/administration/management/package-mgmt.md +++ b/administration/management/package-mgmt.md @@ -150,7 +150,7 @@ show-tag interface So the above command shows that NSO currently has one package, the NED for Cisco IOS. -NSO reads global configuration parameters from `ncs.conf`. More on NSO configuration later in this guide. By default, it tells NSO to look for packages in a `packages` directory where NSO was started. Using the [examples.ncs/device-management/simulated-cisco-ios](https://github.com/NSO-developer/nso-examples/blob/6.4/device-management/simulated-cisco-ios) example to demonstrate: +NSO reads global configuration parameters from `ncs.conf`. More on NSO configuration later in this guide. By default, it tells NSO to look for packages in a `packages` directory where NSO was started. Using the [examples.ncs/device-management/simulated-cisco-ios](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/simulated-cisco-ios) example to demonstrate: ```bash $ pwd diff --git a/administration/management/system-management/README.md b/administration/management/system-management/README.md index 2a52943a..20a2e6fa 100644 --- a/administration/management/system-management/README.md +++ b/administration/management/system-management/README.md @@ -330,11 +330,11 @@ NSO logs in `/logs` in your running directory, (depends on your settings in `ncs ``` * Progress trace log: When a transaction or action is applied, NSO emits specific progress events. These events can be displayed and recorded in a number of different ways, either in CLI with the pipe target `details` on a commit, or by writing it to a log file. You can read more about it in the [Progress Trace](../../../development/advanced-development/progress-trace.md). * Transaction error log: log for collecting information on failed transactions that lead to either a CDB boot error or a runtime transaction failure. The default is `false` (disabled). More information about the log is available in the Manual Pages under [Configuration Parameters](../../../resources/man/ncs.conf.5.md#configuration-parameters) (see `logs/transaction-error-log`). -* Upgrade log: log containing information about CDB upgrade. The log is enabled by default and not rotated (i.e., use logrotate). 
With the NSO example set, the following examples populate the log in the `logs/upgrade.log` file: [examples.ncs/device-management/ned-yang-revision](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/ned-yang-revision), [examples.ncs/high-availability/upgrade-basic](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/upgrade-basic), [examples.ncs/high-availability/upgrade-cluster](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/upgrade-cluster), and [examples.ncs/service-management/upgrade-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/upgrade-service). More information about the log is available in the Manual Pages under [Configuration Parameters](../../../resources/man/ncs.conf.5.md#configuration-parameters) (see `logs/upgrade-log)`. +* Upgrade log: log containing information about CDB upgrade. The log is enabled by default and not rotated (i.e., use logrotate). With the NSO example set, the following examples populate the log in the `logs/upgrade.log` file: [examples.ncs/device-management/ned-yang-revision](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/ned-yang-revision), [examples.ncs/high-availability/upgrade-basic](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/upgrade-basic), [examples.ncs/high-availability/upgrade-cluster](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/upgrade-cluster), and [examples.ncs/service-management/upgrade-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/upgrade-service). More information about the log is available in the Manual Pages under [Configuration Parameters](../../../resources/man/ncs.conf.5.md#configuration-parameters) (see `logs/upgrade-log`). ### Syslog -NSO can syslog to a local Syslog. See `man ncs.conf` how to configure the Syslog settings. All Syslog messages are documented in Log Messages. The `ncs.conf` also lets you decide which of the logs should go into Syslog: `ncs.log, devel.log, netconf.log, snmp.log, audit.log, WebUI access log`. There is also a possibility to integrate with `rsyslog` to log the NCS, developer, audit, netconf, SNMP, and WebUI access logs to syslog with the facility set to daemon in `ncs.conf`. For reference, see the `upgrade-l2` example [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.5/high-availability/hcc) . +NSO can syslog to a local Syslog. See `man ncs.conf` for how to configure the Syslog settings. All Syslog messages are documented in Log Messages. The `ncs.conf` also lets you decide which of the logs should go into Syslog: `ncs.log, devel.log, netconf.log, snmp.log, audit.log, WebUI access log`. There is also a possibility to integrate with `rsyslog` to log the NCS, developer, audit, netconf, SNMP, and WebUI access logs to syslog with the facility set to daemon in `ncs.conf`. For reference, see the `upgrade-l2` example [examples.ncs/high-availability/hcc](https://github.com/NSO-developer/nso-examples/tree/6.6/high-availability/hcc). Below is an example of Syslog configuration: @@ -367,7 +367,7 @@ NSO generates alarms for serious problems that must be remedied. Alarms are avai The NSO alarm manager also presents a northbound SNMP view, alarms can be retrieved as an alarm table, and alarm state changes are reported as SNMP Notifications. See the "NSO Northbound" documentation on how to configure the SNMP Agent. 
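As a minimal sketch (assuming the C-style CLI; the fields shown depend on the NSO version and the currently active alarms), the alarm table can also be inspected directly from the CLI:

```bash
# Minimal sketch: list the current alarms known to the NSO alarm manager.
$ ncs_cli -u admin -C
admin@ncs# show alarms alarm-list
```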
-This is also documented in the example [examples.ncs/northbound-interfaces/snmp-alarm](https://github.com/NSO-developer/nso-examples/tree/6.5/northbound-interfaces/snmp-alarm). +This is also documented in the example [examples.ncs/northbound-interfaces/snmp-alarm](https://github.com/NSO-developer/nso-examples/tree/6.6/northbound-interfaces/snmp-alarm). Alarms are described on the link below: diff --git a/administration/management/system-management/log-messages-and-formats.md b/administration/management/system-management/log-messages-and-formats.md index 2435a193..31b4d3a6 100644 --- a/administration/management/system-management/log-messages-and-formats.md +++ b/administration/management/system-management/log-messages-and-formats.md @@ -243,64 +243,64 @@
-CANDIDATE_BAD_FILE_FORMAT +CAND_COMMIT_ROLLBACK_DONE -CANDIDATE_BAD_FILE_FORMAT +CAND_COMMIT_ROLLBACK_DONE * **Severity** - `WARNING` + `INFO` * **Description** - The candidate database file has a bad format. The candidate database is reset to the empty database. + Candidate commit rollback done * **Format String** - `"Bad format found in candidate db file ~s; resetting candidate"` + `"Candidate commit rollback done"`
-CANDIDATE_CORRUPT_FILE +CAND_COMMIT_ROLLBACK_FAILURE -CANDIDATE_CORRUPT_FILE +CAND_COMMIT_ROLLBACK_FAILURE * **Severity** - `WARNING` + `ERR` * **Description** - The candidate database file is corrupt and cannot be read. The candidate database is reset to the empty database. + Failed to rollback candidate commit * **Format String** - `"Corrupt candidate db file ~s; resetting candidate"` + `"Failed to rollback candidate commit due to: ~s"`
-CAND_COMMIT_ROLLBACK_DONE +CANDIDATE_BAD_FILE_FORMAT -CAND_COMMIT_ROLLBACK_DONE +CANDIDATE_BAD_FILE_FORMAT * **Severity** - `INFO` + `WARNING` * **Description** - Candidate commit rollback done + The candidate database file has a bad format. The candidate database is reset to the empty database. * **Format String** - `"Candidate commit rollback done"` + `"Bad format found in candidate db file ~s; resetting candidate"`
-CAND_COMMIT_ROLLBACK_FAILURE +CANDIDATE_CORRUPT_FILE -CAND_COMMIT_ROLLBACK_FAILURE +CANDIDATE_CORRUPT_FILE * **Severity** - `ERR` + `WARNING` * **Description** - Failed to rollback candidate commit + The candidate database file is corrupt and cannot be read. The candidate database is reset to the empty database. * **Format String** - `"Failed to rollback candidate commit due to: ~s"` + `"Corrupt candidate db file ~s; resetting candidate"`
@@ -531,48 +531,48 @@
-CLI_CMD +CLI_CMD_ABORTED -CLI_CMD +CLI_CMD_ABORTED * **Severity** `INFO` * **Description** - User executed a CLI command. + CLI command aborted. * **Format String** - `"CLI '~s'"` + `"CLI aborted"`
-CLI_CMD_ABORTED +CLI_CMD_DONE -CLI_CMD_ABORTED +CLI_CMD_DONE * **Severity** `INFO` * **Description** - CLI command aborted. + CLI command finished successfully. * **Format String** - `"CLI aborted"` + `"CLI done"`
-CLI_CMD_DONE +CLI_CMD -CLI_CMD_DONE +CLI_CMD * **Severity** `INFO` * **Description** - CLI command finished successfully. + User executed a CLI command. * **Format String** - `"CLI done"` + `"CLI '~s'"`
@@ -1011,16 +1011,16 @@
-EXTAUTH_BAD_RET +EXT_AUTH_2FA_FAIL -EXTAUTH_BAD_RET +EXT_AUTH_2FA_FAIL * **Severity** - `ERR` + `INFO` * **Description** - Authentication is external and the external program returned badly formatted data. + External challenge authentication failed for a user. * **Format String** - `"External auth program (user=~s) ret bad output: ~s"` + `"external challenge authentication failed via ~s from ~s with ~s: ~s"`
@@ -1043,32 +1043,32 @@
-EXT_AUTH_2FA_FAIL +EXT_AUTH_2FA_SUCCESS -EXT_AUTH_2FA_FAIL +EXT_AUTH_2FA_SUCCESS * **Severity** `INFO` * **Description** - External challenge authentication failed for a user. + An external challenge authenticated user logged in. * **Format String** - `"external challenge authentication failed via ~s from ~s with ~s: ~s"` + `"external challenge authentication succeeded via ~s from ~s with ~s, member of groups: ~s~s"`
-EXT_AUTH_2FA_SUCCESS +EXTAUTH_BAD_RET -EXT_AUTH_2FA_SUCCESS +EXTAUTH_BAD_RET * **Severity** - `INFO` + `ERR` * **Description** - An external challenge authenticated user logged in. + Authentication is external and the external program returned badly formatted data. * **Format String** - `"external challenge authentication succeeded via ~s from ~s with ~s, member of groups: ~s~s"` + `"External auth program (user=~s) ret bad output: ~s"`
@@ -1187,32 +1187,32 @@
-FILE_LOADING +FILE_LOAD_ERR -FILE_LOADING +FILE_LOAD_ERR * **Severity** - `DEBUG` + `CRIT` * **Description** - System starts to load a file. + System tried to load a file in its load path and failed. * **Format String** - `"Loading file ~s"` + `"Failed to load file ~s: ~s"`
-FILE_LOAD_ERR +FILE_LOADING -FILE_LOAD_ERR +FILE_LOADING * **Severity** - `CRIT` + `DEBUG` * **Description** - System tried to load a file in its load path and failed. + System starts to load a file. * **Format String** - `"Failed to load file ~s: ~s"` + `"Loading file ~s"`
@@ -1411,48 +1411,48 @@
-JSONRPC_REQUEST +JSONRPC_REQUEST_ABSOLUTE_TIMEOUT -JSONRPC_REQUEST +JSONRPC_REQUEST_ABSOLUTE_TIMEOUT * **Severity** `INFO` * **Description** - JSON-RPC method requested. + JSON-RPC absolute timeout. * **Format String** - `"JSON-RPC: '~s' with JSON params ~s"` + `"Stopping session due to absolute timeout: ~s"`
-JSONRPC_REQUEST_ABSOLUTE_TIMEOUT +JSONRPC_REQUEST_IDLE_TIMEOUT -JSONRPC_REQUEST_ABSOLUTE_TIMEOUT +JSONRPC_REQUEST_IDLE_TIMEOUT * **Severity** `INFO` * **Description** - JSON-RPC absolute timeout. + JSON-RPC idle timeout. * **Format String** - `"Stopping session due to absolute timeout: ~s"` + `"Stopping session due to idle timeout: ~s"`
-JSONRPC_REQUEST_IDLE_TIMEOUT +JSONRPC_REQUEST -JSONRPC_REQUEST_IDLE_TIMEOUT +JSONRPC_REQUEST * **Severity** `INFO` * **Description** - JSON-RPC idle timeout. + JSON-RPC method requested. * **Format String** - `"Stopping session due to idle timeout: ~s"` + `"JSON-RPC: '~s' with JSON params ~s"`
@@ -1555,14 +1555,14 @@
-LOCAL_AUTH_FAIL +LOCAL_AUTH_FAIL_BADPASS -LOCAL_AUTH_FAIL +LOCAL_AUTH_FAIL_BADPASS * **Severity** `INFO` * **Description** - Authentication for a locally configured user failed. + Authentication for a locally configured user failed due to providing bad password. * **Format String** `"local authentication failed via ~s from ~s with ~s: ~s"` @@ -1571,14 +1571,14 @@
-LOCAL_AUTH_FAIL_BADPASS +LOCAL_AUTH_FAIL -LOCAL_AUTH_FAIL_BADPASS +LOCAL_AUTH_FAIL * **Severity** `INFO` * **Description** - Authentication for a locally configured user failed due to providing bad password. + Authentication for a locally configured user failed. * **Format String** `"local authentication failed via ~s from ~s with ~s: ~s"` @@ -1811,32 +1811,32 @@
-MISSING_NS +MISSING_NS2 -MISSING_NS +MISSING_NS2 * **Severity** `CRIT` * **Description** While validating the consistency of the config - a required namespace was missing. * **Format String** - `"The namespace ~s could not be found in the loadPath."` + `"The namespace ~s (referenced by ~s) could not be found in the loadPath."`
-MISSING_NS2 +MISSING_NS -MISSING_NS2 +MISSING_NS * **Severity** `CRIT` * **Description** While validating the consistency of the config - a required namespace was missing. * **Format String** - `"The namespace ~s (referenced by ~s) could not be found in the loadPath."` + `"The namespace ~s could not be found in the loadPath."`
@@ -1859,32 +1859,32 @@
-NETCONF +NETCONF_HDR_ERR -NETCONF +NETCONF_HDR_ERR * **Severity** - `INFO` + `ERR` * **Description** - NETCONF traffic log message + The cleartext header indicating user and groups was badly formatted. * **Format String** - `"~s"` + `"Got bad NETCONF TCP header"`
-NETCONF_HDR_ERR +NETCONF -NETCONF_HDR_ERR +NETCONF * **Severity** - `ERR` + `INFO` * **Description** - The cleartext header indicating user and groups was badly formatted. + NETCONF traffic log message * **Format String** - `"Got bad NETCONF TCP header"` + `"~s"`
@@ -1921,22 +1921,6 @@
-
- -NOTIFICATION_REPLAY_STORE_FAILURE - -NOTIFICATION_REPLAY_STORE_FAILURE - -* **Severity** - `CRIT` -* **Description** - A failure occurred in the builtin notification replay store -* **Format String** - `"~s"` - -
- -
NO_CALLPOINT @@ -2003,16 +1987,16 @@
-NS_LOAD_ERR +NOTIFICATION_REPLAY_STORE_FAILURE -NS_LOAD_ERR +NOTIFICATION_REPLAY_STORE_FAILURE * **Severity** `CRIT` * **Description** - System tried to process a loaded namespace and failed. + A failure occurred in the builtin notification replay store * **Format String** - `"Failed to process namespace ~s: ~s"` + `"~s"`
@@ -2033,6 +2017,22 @@
+
+ +NS_LOAD_ERR + +NS_LOAD_ERR + +* **Severity** + `CRIT` +* **Description** + System tried to process a loaded namespace and failed. +* **Format String** + `"Failed to process namespace ~s: ~s"` + +
+ +
OPEN_LOGFILE @@ -2163,64 +2163,64 @@
-RESTCONF_REQUEST +REST_AUTH_FAIL -RESTCONF_REQUEST +REST_AUTH_FAIL * **Severity** `INFO` * **Description** - RESTCONF request + Rest authentication for a user failed. * **Format String** - `"RESTCONF: request with ~s: ~s"` + `"rest authentication failed from ~s"`
-RESTCONF_RESPONSE +REST_AUTH_SUCCESS -RESTCONF_RESPONSE +REST_AUTH_SUCCESS * **Severity** `INFO` * **Description** - RESTCONF response + A rest authenticated user logged in. * **Format String** - `"RESTCONF: response with ~s: ~s duration ~s us"` + `"rest authentication succeeded from ~s , member of groups: ~s"`
-REST_AUTH_FAIL +RESTCONF_REQUEST -REST_AUTH_FAIL +RESTCONF_REQUEST * **Severity** `INFO` * **Description** - Rest authentication for a user failed. + RESTCONF request * **Format String** - `"rest authentication failed from ~s"` + `"RESTCONF: request with ~s: ~s"`
-REST_AUTH_SUCCESS +RESTCONF_RESPONSE -REST_AUTH_SUCCESS +RESTCONF_RESPONSE * **Severity** `INFO` * **Description** - A rest authenticated user logged in. + RESTCONF response * **Format String** - `"rest authentication succeeded from ~s , member of groups: ~s"` + `"RESTCONF: response with ~s: ~s duration ~s us"`
@@ -2801,22 +2801,6 @@
-
- -WEBUI_LOG_MSG - -WEBUI_LOG_MSG - -* **Severity** - `INFO` -* **Description** - WebUI access log message -* **Format String** - `"WebUI access log: ~s"` - -
- -
WEB_ACTION @@ -2865,6 +2849,22 @@
+
+ +WEBUI_LOG_MSG + +WEBUI_LOG_MSG + +* **Severity** + `INFO` +* **Description** + WebUI access log message +* **Format String** + `"WebUI access log: ~s"` + +
+ +
WRITE_STATE_FILE_FAILED @@ -3361,6 +3361,22 @@
+
+ +NCS_SNMP_INIT_ERR + +NCS_SNMP_INIT_ERR + +* **Severity** + `INFO` +* **Description** + Failed to locate snmp_init.xml in loadpath +* **Format String** + `"Failed to locate snmp_init.xml in loadpath ~s"` + +
+ +
NCS_SNMPM_START @@ -3395,16 +3411,32 @@
-NCS_SNMP_INIT_ERR +NCS_TLS_CERT_LOAD_FR_DB_ERR -NCS_SNMP_INIT_ERR +NCS_TLS_CERT_LOAD_FR_DB_ERR * **Severity** - `INFO` + `CRIT` * **Description** - Failed to locate snmp_init.xml in loadpath + Failed to load SSL/TLS certificate from database. * **Format String** - `"Failed to locate snmp_init.xml in loadpath ~s"` + `"Failed to load SSL/TLS certificate from db: ~s."` + +
+ + +
+ +NCS_TLS_CERT_LOAD_FR_FILE_ERR + +NCS_TLS_CERT_LOAD_FR_FILE_ERR + +* **Severity** + `CRIT` +* **Description** + Failed to load SSL/TLS certificate from file. +* **Format String** + `"Failed to load SSL/TLS certificate from file: ~s; Please check files specified at /ncs-config/webui/transport/ssl/cert-file or /ncs-config/webui/transport/ssl/ca-cert-file"`
diff --git a/developer-reference/erlang/econfd_cdb.md b/developer-reference/erlang/econfd_cdb.md index 3003fedb..5efe2396 100644 --- a/developer-reference/erlang/econfd_cdb.md +++ b/developer-reference/erlang/econfd_cdb.md @@ -1004,11 +1004,11 @@ The fun can return the atom 'close' if we wish to close the socket and return fr * ?CDB_DONE_TRANSACTION This means that CDB should not send any further notifications to any subscribers - including ourselves - related to the currently executing transaction. * ?CDB_DONE_OPERATIONAL This should be used when a subscription notification for operational data has been read. It is the only type that should be used in this case, since the operational data does not have transactions and the notifications do not have priorities. -Finally the arity-3 fun can, when Type == ?CDB_SUB_PREPARE, return an error either as \{error, binary()\} or as \{error, #confd_error\{\}\} (\{error, tuple()\} is only for internal ConfD/NCS use). This will cause the commit of the current transaction to be aborted. +Finally the arity-3 fun can, when Type == ?CDB_SUB_PREPARE, return an error either as `{error, binary()}` or as `{error, #confd_error{}}` (\{error, tuple()\} is only for internal ConfD/NCS use). This will cause the commit of the current transaction to be aborted. CDB is locked for writing while config subscriptions are delivered. -When wait/3 returns \{error, timeout\} the connection (and its subscriptions) is still active and the application needs to call wait/3 again. But if wait/3 returns ok or \{error, Reason\} the connection to ConfD is closed and all subscription points associated with it are cleared. +When wait/3 returns `{error, timeout}` the connection (and its subscriptions) is still active and the application needs to call wait/3 again. But if wait/3 returns `ok` or `{error, Reason}` the connection to ConfD is closed and all subscription points associated with it are cleared. ### wait_start/1 diff --git a/developer-reference/erlang/econfd_notif.md b/developer-reference/erlang/econfd_notif.md index 6a992346..e54f6754 100644 --- a/developer-reference/erlang/econfd_notif.md +++ b/developer-reference/erlang/econfd_notif.md @@ -200,7 +200,7 @@ Wait for an event notification message and return corresponding record depending The logno element in the record is an integer. These integers can be used as an index to the function `econfd_logsyms:get_logsym/1` in order to get a textual description for the event. -When recv/2 returns \{error, timeout\} the connection (and its event subscriptions) is still active and the application needs to call recv/2 again. But if recv/2 returns \{error, Reason\} the connection to ConfD is closed and all event subscriptions associated with it are cleared. +When recv/2 returns `{error, timeout}` the connection (and its event subscriptions) is still active and the application needs to call recv/2 again. But if recv/2 returns `{error, Reason}` the connection to ConfD is closed and all event subscriptions associated with it are cleared. ### unpack_ha_node/1 diff --git a/developer-reference/pyapi/README.md b/developer-reference/pyapi/README.md index a1e3cf22..e8b4a894 100644 --- a/developer-reference/pyapi/README.md +++ b/developer-reference/pyapi/README.md @@ -1,28 +1,28 @@ --- icon: square-p --- - # Python API Reference Documentation for Python modules, generated from module source: -* [ncs](ncs.md): NCS Python high level module. -* [ncs.alarm](ncs.alarm.md): NCS Alarm Manager module. 
-* [ncs.application](ncs.application.md): Module for building NCS applications. -* [ncs.cdb](ncs.cdb.md): CDB high level module. -* [ncs.dp](ncs.dp.md): Callback module for connecting data providers to ConfD/NCS. -* [ncs.experimental](ncs.experimental.md): Experimental stuff. -* [ncs.log](ncs.log.md): This module provides some logging utilities. -* [ncs.maagic](ncs.maagic.md): Confd/NCS data access module. -* [ncs.maapi](ncs.maapi.md): MAAPI high level module. -* [ncs.progress](ncs.progress.md): MAAPI progress trace high level module. -* [ncs.service\_log](ncs.service_log.md): This module provides service logging -* [ncs.template](ncs.template.md): This module implements classes to simplify template processing. -* [ncs.util](ncs.util.md): Utility module, low level abstrations -* [\_ncs](_ncs.md): NCS Python low level module. -* [\_ncs.cdb](_ncs.cdb.md): Low level module for connecting to NCS built-in XML database (CDB). -* [\_ncs.dp](_ncs.dp.md): Low level callback module for connecting data providers to NCS. -* [\_ncs.error](_ncs.error.md): This module defines new NCS Python API exception classes. -* [\_ncs.events](_ncs.events.md): Low level module for subscribing to NCS event notifications. -* [\_ncs.ha](_ncs.ha.md): Low level module for connecting to NCS HA subsystem. -* [\_ncs.maapi](_ncs.maapi.md): Low level module for connecting to NCS with a read/write interface inside transactions. +- [ncs](ncs.md): NCS Python high level module. +- [ncs.alarm](ncs.alarm.md): NCS Alarm Manager module. +- [ncs.application](ncs.application.md): Module for building NCS applications. +- [ncs.cdb](ncs.cdb.md): CDB high level module. +- [ncs.dp](ncs.dp.md): Callback module for connecting data providers to ConfD/NCS. +- [ncs.experimental](ncs.experimental.md): Experimental stuff. +- [ncs.log](ncs.log.md): This module provides some logging utilities. +- [ncs.maagic](ncs.maagic.md): Confd/NCS data access module. +- [ncs.maapi](ncs.maapi.md): MAAPI high level module. +- [ncs.progress](ncs.progress.md): MAAPI progress trace high level module. +- [ncs.service_log](ncs.service_log.md): This module provides service logging +- [ncs.template](ncs.template.md): This module implements classes to simplify template processing. +- [ncs.util](ncs.util.md): Utility module, low level abstrations +- [_ncs](_ncs.md): NCS Python low level module. +- [_ncs.cdb](_ncs.cdb.md): Low level module for connecting to NCS built-in XML database (CDB). +- [_ncs.dp](_ncs.dp.md): Low level callback module for connecting data providers to NCS. +- [_ncs.error](_ncs.error.md): This module defines new NCS Python API exception classes. +- [_ncs.events](_ncs.events.md): Low level module for subscribing to NCS event notifications. +- [_ncs.ha](_ncs.ha.md): Low level module for connecting to NCS HA subsystem. +- [_ncs.maapi](_ncs.maapi.md): Low level module for connecting to NCS with a read/write interface +inside transactions. diff --git a/developer-reference/pyapi/_ncs.cdb.md b/developer-reference/pyapi/_ncs.cdb.md index 0da7eae1..0070e26f 100644 --- a/developer-reference/pyapi/_ncs.cdb.md +++ b/developer-reference/pyapi/_ncs.cdb.md @@ -1,14 +1,22 @@ -# \_ncs.cdb Module +# Python _ncs.cdb Module Low level module for connecting to NCS built-in XML database (CDB). -This module is used to connect to the NCS built-in XML database, CDB. The purpose of this API is to provide a read and subscription API to CDB. +This module is used to connect to the NCS built-in XML database, CDB. 
+The purpose of this API is to provide a read and subscription API to CDB. -CDB owns and stores the configuration data and the user of the API wants to read that configuration data and also get notified when someone through either NETCONF, SNMP, the CLI, the Web UI or the MAAPI modifies the data so that the application can re-read the configuration data and act accordingly. +CDB owns and stores the configuration data and the user of the API wants +to read that configuration data and also get notified when someone through +either NETCONF, SNMP, the CLI, the Web UI or the MAAPI modifies the data +so that the application can re-read the configuration data and act +accordingly. -CDB can also store operational data, i.e. data which is designated with a "config false" statement in the YANG data model. Operational data can be both read and written by the applications, but NETCONF and the other northbound agents can only read the operational data. +CDB can also store operational data, i.e. data which is designated with a +"config false" statement in the YANG data model. Operational data can be +both read and written by the applications, but NETCONF and the other +northbound agents can only read the operational data. -This documentation should be read together with the [confd\_lib\_cdb(3)](../../resources/man/confd_lib_cdb.3.md) man page. +This documentation should be read together with the [confd_lib_cdb(3)](../../resources/man/confd_lib_cdb.3.md) man page. ## Functions @@ -18,7 +26,8 @@ This documentation should be read together with the [confd\_lib\_cdb(3)](../../r cd(sock, path) -> None ``` -Changes the working directory according to the format path. Note that this function can not be used as an existence test. +Changes the working directory according to the format path. Note that +this function can not be used as an existence test. Keyword arguments: @@ -31,7 +40,8 @@ Keyword arguments: close(sock) -> None ``` -Closes the socket. end\_session() should be called before calling this function. +Closes the socket. end_session() should be called before calling this +function. Keyword arguments: @@ -43,32 +53,39 @@ Keyword arguments: connect(sock, type, ip, port, path) -> None ``` -The application has to connect to NCS before it can interact. There are two different types of connections identified by the type argument - DATA\_SOCKET and SUBSCRIPTION\_SOCKET. +The application has to connect to NCS before it can interact. There are two +different types of connections identified by the type argument - +DATA_SOCKET and SUBSCRIPTION_SOCKET. Keyword arguments: * sock -- a Python socket instance -* type -- DATA\_SOCKET or SUBSCRIPTION\_SOCKET -* ip -- the ip address if socket is AF\_INET (optional) -* port -- the port if socket is AF\_INET (optional) -* path -- a filename if socket is AF\_UNIX (optional). +* type -- DATA_SOCKET or SUBSCRIPTION_SOCKET +* ip -- the ip address if socket is AF_INET (optional) +* port -- the port if socket is AF_INET (optional) +* path -- a filename if socket is AF_UNIX (optional). -### connect\_name +### connect_name ```python connect_name(sock, type, name, ip, port, path) -> None ``` -When we use connect() to create a connection to NCS/CDB, the name argument passed to the library initialization function confd\_init() (see [confd\_lib\_lib(3)](../../resources/man/confd_lib_lib.3.md)) is used to identify the connection in status reports and logs. 
I we want different names to be used for different connections from the same application process, we can use connect\_name() with the wanted name instead of connect(). +When we use connect() to create a connection to NCS/CDB, the name +argument passed to the library initialization function confd_init() (see +[confd_lib_lib(3)](../../resources/man/confd_lib_lib.3.md)) is used to identify the connection in status reports and +logs. I we want different names to be used for different connections from +the same application process, we can use connect_name() with the wanted +name instead of connect(). Keyword arguments: * sock -- a Python socket instance -* type -- DATA\_SOCKET or SUBSCRIPTION\_SOCKET +* type -- DATA_SOCKET or SUBSCRIPTION_SOCKET * name -- the name -* ip -- the ip address if socket is AF\_INET (optional) -* port -- the port if socket is AF\_INET (optional) -* path -- a filename if socket is AF\_UNIX (optional). +* ip -- the ip address if socket is AF_INET (optional) +* port -- the port if socket is AF_INET (optional) +* path -- a filename if socket is AF_UNIX (optional). ### create @@ -76,14 +93,19 @@ Keyword arguments: create(sock, path) -> None ``` -Create a new list entry, presence container, or leaf of type empty (unless in a union, if type empty is in a union use set\_elem instead). Note that for list entries and containers, sub-elements will not exist until created or set via some of the other functions, thus doing implicit create via set\_object() or set\_values() may be preferred in this case. +Create a new list entry, presence container, or leaf +of type empty (unless in a union, if type empty is in a union +use set_elem instead). Note +that for list entries and containers, sub-elements will not exist until +created or set via some of the other functions, thus doing implicit +create via set_object() or set_values() may be preferred in this case. Keyword arguments: * sock -- a previously connected CDB socket * path -- item to create (string) -### cs\_node\_cd +### cs_node_cd ```python cs_node_cd(socket, path) -> Union[_ncs.CsNode, None] @@ -91,7 +113,9 @@ cs_node_cd(socket, path) -> Union[_ncs.CsNode, None] Utility function which finds the resulting CsNode given a string keypath. -Does the same thing as \_ncs.cs\_node\_cd(), but can handle paths that are ambiguous due to traversing a mount point, by sending a request to the daemon +Does the same thing as _ncs.cs_node_cd(), but can handle paths that are +ambiguous due to traversing a mount point, by sending a request to the +daemon Keyword arguments: @@ -104,22 +128,26 @@ Keyword arguments: delete(sock, path) -> None ``` -Delete a list entry, presence container, or leaf of type empty, and all its child elements (if any). +Delete a list entry, presence container, or leaf of type empty, and all +its child elements (if any). Keyword arguments: * sock -- a previously connected CDB socket * path -- item to delete (string) -### diff\_iterate +### diff_iterate ```python diff_iterate(sock, subid, iter, flags, initstate) -> int ``` -After reading the subscription socket the diff\_iterate() function can be used to iterate over the changes made in CDB data that matched the particular subscription point given by subid. +After reading the subscription socket the diff_iterate() function can be +used to iterate over the changes made in CDB data that matched the +particular subscription point given by subid. -The user defined function iter() will be called for each element that has been modified and matches the subscription. 
+The user defined function iter() will be called for each element that has +been modified and matches the subscription. This function will return the last return value from the iter() callback. @@ -130,11 +158,11 @@ Keyword arguments: * iter -- iterator function (see below) * initstate -- opaque passed to iter function -The user defined function iter() will be called for each element that has been modified and matches the subscription. It must have the following signature: +The user defined function iter() will be called for each element that has +been modified and matches the subscription. It must have the following +signature: -``` -iter_fn(kp, op, oldv, newv, state) -> int -``` + iter_fn(kp, op, oldv, newv, state) -> int Where arguments are: @@ -144,13 +172,19 @@ Where arguments are: * newv - the new value or None * state - the initstate object -### diff\_iterate\_resume +### diff_iterate_resume ```python diff_iterate_resume(sock, reply, iter, resumestate) -> int ``` -The application must call this function whenever an iterator function has returned ITER\_SUSPEND to finish up the iteration. If the application does not wish to continue iteration it must at least call diff\_iterate\_resume(sock, ITER\_STOP, None, None) to clean up the state. The reply parameter is what the iterator function would have returned (i.e. normally ITER\_RECURSE or ITER\_CONTINUE) if it hadn't returned ITER\_SUSPEND. +The application must call this function whenever an iterator function has +returned ITER_SUSPEND to finish up the iteration. If the application does +not wish to continue iteration it must at least call +diff_iterate_resume(sock, ITER_STOP, None, None) to clean up the state. +The reply parameter is what the iterator function would have returned +(i.e. normally ITER_RECURSE or ITER_CONTINUE) if it hadn't returned +ITER_SUSPEND. This function will return the last return value from the iter() callback. @@ -158,16 +192,19 @@ Keyword arguments: * sock -- a previously connected CDB socket * reply -- the reply value -* iter -- iterator function (see diff\_iterate()) +* iter -- iterator function (see diff_iterate()) * resumestate -- opaque passed to iter function -### end\_session +### end_session ```python end_session(sock) -> None ``` -We use connect() to establish a read socket to CDB. When the socket is closed, the read session is ended. We can reuse the same socket for another read session, but we must then end the session and create another session using start\_session(). +We use connect() to establish a read socket to CDB. When the socket is +closed, the read session is ended. We can reuse the same socket for another +read session, but we must then end the session and create another session +using start_session(). Keyword arguments: @@ -179,7 +216,9 @@ Keyword arguments: exists(sock, path) -> bool ``` -Leafs in the data model may be optional, and presence containers and list entries may or may not exist. This function checks whether a node exists in CDB. +Leafs in the data model may be optional, and presence containers and list +entries may or may not exist. This function checks whether a node exists +in CDB. Keyword arguments: @@ -192,20 +231,23 @@ Keyword arguments: get(sock, path) -> _ncs.Value ``` -This reads a a value from the path and returns the result. The path must lead to a leaf element in the XML data tree. +This reads a a value from the path and returns the result. The path must +lead to a leaf element in the XML data tree. 
Keyword arguments: * sock -- a previously connected CDB socket * path -- path to leaf -### get\_case +### get_case ```python get_case(sock, choice, path) -> None ``` -When we use the YANG choice statement in the data model, this function can be used to find the currently selected case, avoiding useless get() etc requests for elements that belong to other cases. +When we use the YANG choice statement in the data model, this function +can be used to find the currently selected case, avoiding useless +get() etc requests for elements that belong to other cases. Keyword arguments: @@ -213,7 +255,7 @@ Keyword arguments: * choice -- the choice (string) * path -- path to container or list entry where choice is defined (string) -### get\_compaction\_info +### get_compaction_info ```python get_compaction_info(sock, dbfile) -> dict @@ -223,29 +265,32 @@ Returns the compaction information on the given CDB file. The return value is a dict of the form: -``` -{ - 'fsize_previous': fsize_previous, - 'fsize_current': fsize_current, - 'last_time': last_time, - 'ntrans': ntrans -} -``` + { + 'fsize_previous': fsize_previous, + 'fsize_current': fsize_current, + 'last_time': last_time, + 'ntrans': ntrans + } In this dict all values are integers. Keyword arguments: * sock -- a previously connected CDB socket -* dbfile -- A\_CDB, O\_CDB or S\_CDB. +* dbfile -- A_CDB, O_CDB or S_CDB. -### get\_modifications +### get_modifications ```python get_modifications(sock, subid, flags, path) -> list ``` -The get\_modifications() function can be called after reception of a subscription notification to retrieve all the changes that caused the subscription notification. The socket sock is the subscription socket. The subscription id must also be provided. Optionally a path can be used to limit what is returned further (only changes below the supplied path will be returned), if this isn't needed path can be set to None. +The get_modifications() function can be called after reception of a +subscription notification to retrieve all the changes that caused the +subscription notification. The socket sock is the subscription socket. The +subscription id must also be provided. Optionally a path can be used to +limit what is returned further (only changes below the supplied path will +be returned), if this isn't needed path can be set to None. Keyword arguments: @@ -254,13 +299,16 @@ Keyword arguments: * flags -- the flags * path -- a path in string format or None -### get\_modifications\_cli +### get_modifications_cli ```python get_modifications_cli(sock, subid, flags) -> str ``` -The get\_modifications\_cli() function can be called after reception of a subscription notification to retrieve all the changes that caused the subscription notification as a string in Cisco CLI format. The socket sock is the subscription socket. The subscription id must also be provided. +The get_modifications_cli() function can be called after reception of +a subscription notification to retrieve all the changes that caused the +subscription notification as a string in Cisco CLI format. The socket sock +is the subscription socket. The subscription id must also be provided. 
Keyword arguments: @@ -268,26 +316,31 @@ Keyword arguments: * subid -- subscription id * flags -- the flags -### get\_modifications\_iter +### get_modifications_iter ```python get_modifications_iter(sock, flags) -> list ``` -The get\_modifications\_iter() is basically a convenient short-hand of the get\_modifications() function intended to be used from within a iteration function started by diff\_iterate(). In this case no subscription id is needed, and the path is implicitly the current position in the iteration. +The get_modifications_iter() is basically a convenient short-hand of +the get_modifications() function intended to be used from within a +iteration function started by diff_iterate(). In this case no subscription +id is needed, and the path is implicitly the current position in the +iteration. Keyword arguments: * sock -- a previously connected CDB socket * flags -- the flags -### get\_object +### get_object ```python get_object(sock, n, path) -> list ``` -This function reads at most n values from the container or list entry specified by the path, and returns them as a list of Value's. +This function reads at most n values from the container or list entry +specified by the path, and returns them as a list of Value's. Keyword arguments: @@ -295,13 +348,18 @@ Keyword arguments: * n -- max number of values to read * path -- path to a list entry or a container (string) -### get\_objects +### get_objects ```python get_objects(sock, n, ix, nobj, path) -> list ``` -Similar to get\_object(), but reads multiple entries of a list based on the "instance integer" otherwise given within square brackets in the path - here the path must specify the list without the instance integer. At most n values from each of nobj entries, starting at entry ix, are read and placed in the values array. The return value is a list of objects where each object is represented as a list of Values. +Similar to get_object(), but reads multiple entries of a list based +on the "instance integer" otherwise given within square brackets in the +path - here the path must specify the list without the instance integer. +At most n values from each of nobj entries, starting at entry ix, are +read and placed in the values array. The return value is a list of objects +where each object is represented as a list of Values. Keyword arguments: @@ -311,102 +369,128 @@ Keyword arguments: * nobj -- number of objects to read * path -- path to a list entry or a container (string) -### get\_phase +### get_phase ```python get_phase(sock) -> dict ``` -Returns the start-phase that CDB is currently in. The return value is a dict of the form: +Returns the start-phase that CDB is currently in. The return value is a +dict of the form: -``` -{ - 'phase': phase, - 'flags': flags, - 'init': init, - 'upgrade': upgrade -} -``` + { + 'phase': phase, + 'flags': flags, + 'init': init, + 'upgrade': upgrade + } -In this dict 'phase' and 'flags' are integers, while 'init' and 'upgrade' are booleans. +In this dict 'phase' and 'flags' are integers, while 'init' and 'upgrade' +are booleans. Keyword arguments: * sock -- a previously connected CDB socket -### get\_replay\_txids +### get_replay_txids ```python get_replay_txids(sock) -> List[Tuple] ``` -When the subscriptionReplay functionality is enabled in confd.conf this function returns the list of available transactions that CDB can replay. The current transaction id will be the first in the list, the second at txid\[1] and so on. 
In case there are no replay transactions available (the feature isn't enabled or there hasn't been any transactions yet) only one (the current) transaction id is returned. +When the subscriptionReplay functionality is enabled in confd.conf this +function returns the list of available transactions that CDB can replay. +The current transaction id will be the first in the list, the second at +txid[1] and so on. In case there are no replay transactions available (the +feature isn't enabled or there hasn't been any transactions yet) only one +(the current) transaction id is returned. -The returned list contains tuples with the form (s1, s2, s3, primary) where s1, s2 and s3 are unsigned integers and primary is either a string or None. +The returned list contains tuples with the form (s1, s2, s3, primary) where +s1, s2 and s3 are unsigned integers and primary is either a string or None. Keyword arguments: * sock -- a previously connected CDB socket -### get\_transaction\_handle +### get_transaction_handle ```python get_transaction_handle(sock) -> int ``` -Returns the transaction handle for the transaction that triggered the current subscription notification. This function uses a subscription socket, and can only be called when a subscription notification for configuration data has been received on that socket, before sync\_subscription\_socket() has been called. Additionally, it is not possible to call this function from the iter() function passed to diff\_iterate(). +Returns the transaction handle for the transaction that triggered the +current subscription notification. This function uses a subscription +socket, and can only be called when a subscription notification for +configuration data has been received on that socket, before +sync_subscription_socket() has been called. Additionally, it is not +possible to call this function from the iter() function passed to +diff_iterate(). Note: - -> A CDB client is not expected to access the ConfD transaction store directly - this function should only be used for logging or debugging purposes. +> A CDB client is not expected to access the ConfD transaction store +> directly - this function should only be used for logging or debugging +> purposes. Note: - -> When the ConfD High Availability functionality is used, the transaction information is not available on secondary nodes. +> When the ConfD High Availability functionality is used, the +> transaction information is not available on secondary nodes. Keyword arguments: * sock -- a previously connected CDB socket -### get\_txid +### get_txid ```python get_txid(sock) -> tuple ``` -Read the last transaction id from CDB. This function can be used if we are forced to reconnect to CDB. If the transaction id we read is identical to the last id we had prior to loosing the CDB sockets we don't have to reload our managed object data. See the User Guide for full explanation. +Read the last transaction id from CDB. This function can be used if we are +forced to reconnect to CDB. If the transaction id we read is identical to +the last id we had prior to loosing the CDB sockets we don't have to reload +our managed object data. See the User Guide for full explanation. -The returned tuple has the form (s1, s2, s3, primary) where s1, s2 and s3 are unsigned integers and primary is either a string or None. +The returned tuple has the form (s1, s2, s3, primary) where s1, s2 and s3 +are unsigned integers and primary is either a string or None. 
Keyword arguments: * sock -- a previously connected CDB socket -### get\_user\_session +### get_user_session ```python get_user_session(sock) -> int ``` -Returns the user session id for the transaction that triggered the current subscription notification. This function uses a subscription socket, and can only be called when a subscription notification for configuration data has been received on that socket, before sync\_subscription\_socket() has been called. Additionally, it is not possible to call this function from the iter() function passed to diff\_iterate(). To retrieve full information about the user session, use \_maapi.get\_user\_session() (see [confd\_lib\_maapi(3)](../../resources/man/confd_lib_maapi.3.md)). +Returns the user session id for the transaction that triggered the +current subscription notification. This function uses a subscription +socket, and can only be called when a subscription notification for +configuration data has been received on that socket, before +sync_subscription_socket() has been called. Additionally, it is not +possible to call this function from the iter() function passed to +diff_iterate(). To retrieve full information about the user session, +use _maapi.get_user_session() (see [confd_lib_maapi(3)](../../resources/man/confd_lib_maapi.3.md)). Note: - -> When the ConfD High Availability functionality is used, the user session information is not available on secondary nodes. +> When the ConfD High Availability functionality is used, the +> user session information is not available on secondary nodes. Keyword arguments: * sock -- a previously connected CDB socket -### get\_values +### get_values ```python get_values(sock, values, path) -> list ``` -Read an arbitrary set of sub-elements of a container or list entry. The values list must be pre-populated with a number of TagValue instances. +Read an arbitrary set of sub-elements of a container or list entry. The +values list must be pre-populated with a number of TagValue instances. -TagValues passed in the values list will be updated with the corresponding values read and a new values list will be returned. +TagValues passed in the values list will be updated with the corresponding +values read and a new values list will be returned. Keyword arguments: @@ -420,19 +504,24 @@ Keyword arguments: getcwd(sock) -> str ``` -Returns the current position as previously set by cd(), pushd(), or popd() as a string path. Note that what is returned is a pretty-printed version of the internal representation of the current position. It will be the shortest unique way to print the path but it might not exactly match the string given to cd(). +Returns the current position as previously set by cd(), pushd(), or popd() +as a string path. Note that what is returned is a pretty-printed version of +the internal representation of the current position. It will be the shortest +unique way to print the path but it might not exactly match the string given +to cd(). Keyword arguments: * sock -- a previously connected CDB socket -### getcwd\_kpath +### getcwd_kpath ```python getcwd_kpath(sock) -> _ncs.HKeypathRef ``` -Returns the current position like getcwd(), but as a HKeypathRef instead of as a string. +Returns the current position like getcwd(), but as a HKeypathRef +instead of as a string. 
Keyword arguments: @@ -451,71 +540,87 @@ Keyword arguments: * sock -- a previously connected CDB socket * path -- path to list entry -### initiate\_journal\_compaction +### initiate_journal_compaction ```python initiate_journal_compaction(sock) -> None ``` -Normally CDB handles journal compaction of the config datastore automatically. If this has been turned off (in the configuration file) then the A.cdb file will grow indefinitely unless this API function is called periodically to initiate compaction. This function initiates a compaction and returns immediately (if the datastore is locked, the compaction will be delayed, but eventually compaction will take place). Calling this function when journal compaction is configured to be automatic has no effect. +Normally CDB handles journal compaction of the config datastore +automatically. If this has been turned off (in the configuration file) +then the A.cdb file will grow indefinitely unless this API function is +called periodically to initiate compaction. This function initiates a +compaction and returns immediately (if the datastore is locked, the +compaction will be delayed, but eventually compaction will take place). +Calling this function when journal compaction is configured to be automatic +has no effect. Keyword arguments: * sock -- a previously connected CDB socket -### initiate\_journal\_dbfile\_compaction +### initiate_journal_dbfile_compaction ```python initiate_journal_dbfile_compaction(sock, dbfile) -> None ``` -Similar to initiate\_journal\_compaction() but initiates the compaction on the given CDB file instead of all CDB files. +Similar to initiate_journal_compaction() but initiates the compaction +on the given CDB file instead of all CDB files. Keyword arguments: * sock -- a previously connected CDB socket -* dbfile -- A\_CDB, O\_CDB or S\_CDB. +* dbfile -- A_CDB, O_CDB or S_CDB. -### is\_default +### is_default ```python is_default(sock, path) -> bool ``` -This function returns True for a leaf which has a default value defined in the data model when no value has been set, i.e. when the default value is in effect. It returns False for other existing leafs. There is normally no need to call this function, since CDB automatically provides the default value as needed when get() etc is called. +This function returns True for a leaf which has a default value defined in +the data model when no value has been set, i.e. when the default value is +in effect. It returns False for other existing leafs. +There is normally no need to call this function, since CDB automatically +provides the default value as needed when get() etc is called. Keyword arguments: * sock -- a previously connected CDB socket * path -- path to leaf -### mandatory\_subscriber +### mandatory_subscriber ```python mandatory_subscriber(sock, name) -> None ``` -Attaches a mandatory attribute and a mandatory name to the subscriber identified by sock. The name argument is distinct from the name argument in connect\_name(). +Attaches a mandatory attribute and a mandatory name to the subscriber +identified by sock. The name argument is distinct from the name argument +in connect_name(). Keyword arguments: * sock -- a previously connected CDB socket * name -- the name -### next\_index +### next_index ```python next_index(sock, path) -> int ``` -Given a path to a list entry next\_index() returns the position (starting from 0) of the next entry (regardless of whether the path exists or not). 
+Given a path to a list entry next_index() returns the position +(starting from 0) of the next entry (regardless of whether the path +exists or not). Keyword arguments: * sock -- a previously connected CDB socket * path -- path to list entry -### num\_instances +### num_instances ```python num_instances(sock, path) -> int @@ -528,13 +633,16 @@ Keyword arguments: * sock -- a previously connected CDB socket * path -- path to list node -### oper\_subscribe +### oper_subscribe ```python oper_subscribe(sock, nspace, path) -> int ``` -Sets up a CDB subscription for changes in the operational database. Similar to the subscriptions for configuration data, we can be notified of changes to the operational data stored in CDB. Note that there are several differences from the subscriptions for configuration data. +Sets up a CDB subscription for changes in the operational database. +Similar to the subscriptions for configuration data, we can be notified +of changes to the operational data stored in CDB. Note that there are +several differences from the subscriptions for configuration data. Keyword arguments: @@ -548,7 +656,8 @@ Keyword arguments: popd(sock) -> None ``` -Pops the top element from the directory stack and changes directory to previous directory. +Pops the top element from the directory stack and changes directory to +previous directory. Keyword arguments: @@ -567,51 +676,59 @@ Keyword arguments: * sock -- a previously connected CDB socket * path -- path to cd to -### read\_subscription\_socket +### read_subscription_socket ```python read_subscription_socket(sock) -> list ``` -This call will return a list of integer values containing subscription points earlier acquired through calls to subscribe(). +This call will return a list of integer values containing subscription +points earlier acquired through calls to subscribe(). Keyword arguments: * sock -- a previously connected CDB socket -### read\_subscription\_socket2 +### read_subscription_socket2 ```python read_subscription_socket2(sock) -> tuple ``` -Another version of read\_subscription\_socket() which will return a 3-tuple in the form (type, flags, subpoints). +Another version of read_subscription_socket() which will return a 3-tuple +in the form (type, flags, subpoints). Keyword arguments: * sock -- a previously connected CDB socket -### replay\_subscriptions +### replay_subscriptions ```python replay_subscriptions(sock, txid, sub_points) -> None ``` -This function makes it possible to replay the subscription events for the last configuration change to some or all CDB subscribers. This call is useful in a number of recovery scenarios, where some CDB subscribers lost connection to ConfD before having received all the changes in a transaction. The replay functionality is only available if it has been enabled in confd.conf. +This function makes it possible to replay the subscription events for the +last configuration change to some or all CDB subscribers. This call is +useful in a number of recovery scenarios, where some CDB subscribers lost +connection to ConfD before having received all the changes in a +transaction. The replay functionality is only available if it has been +enabled in confd.conf. 
Keyword arguments: * sock -- a previously connected CDB socket * txid -- a 4-tuple of the form (s1, s2, s3, primary) -* sub\_points -- a list of subscription points +* sub_points -- a list of subscription points -### set\_case +### set_case ```python set_case(sock, choice, scase, path) -> None ``` -When we use the YANG choice statement in the data model, this function can be used to select the current case. +When we use the YANG choice statement in the data model, this function +can be used to select the current case. Keyword arguments: @@ -620,13 +737,14 @@ Keyword arguments: * scase -- the case (string) * path -- path to container or list entry where choice is defined (string) -### set\_elem +### set_elem ```python set_elem(sock, value, path) -> None ``` -Set the value of a single leaf. The value may be either a Value instance or a string. +Set the value of a single leaf. The value may be either a Value instance or +a string. Keyword arguments: @@ -634,26 +752,30 @@ Keyword arguments: * value -- the value to set * path -- a string pointing to a single leaf -### set\_namespace +### set_namespace ```python set_namespace(sock, hashed_ns) -> None ``` -If we want to access data in CDB where the toplevel element name is not unique, we need to set the namespace. We are reading data related to a specific .fxs file. confdc can be used to generate a .py file with a class for the namespace, by the flag --emit-python to confdc (see confdc(1)). +If we want to access data in CDB where the toplevel element name is not +unique, we need to set the namespace. We are reading data related to a +specific .fxs file. confdc can be used to generate a .py file with a class +for the namespace, by the flag --emit-python to confdc (see confdc(1)). Keyword arguments: * sock -- a previously connected CDB socket -* hashed\_ns -- the namespace hash +* hashed_ns -- the namespace hash -### set\_object +### set_object ```python set_object(sock, values, path) -> None ``` -Set all elements corresponding to the complete contents of a container or list entry, except for sub-lists. +Set all elements corresponding to the complete contents of a container or +list entry, except for sub-lists. Keyword arguments: @@ -661,20 +783,25 @@ Keyword arguments: * values -- a list of Value:s * path -- path to container or list entry (string) -### set\_timeout +### set_timeout ```python set_timeout(sock, timeout_secs) -> None ``` -A timeout for client actions can be specified via /confdConfig/cdb/clientTimeout in confd.conf, see the confd.conf(5) manual page. This function can be used to dynamically extend (or shorten) the timeout for the current action. Thus it is possible to configure a restrictive timeout in confd.conf, but still allow specific actions to have a longer execution time. +A timeout for client actions can be specified via +/confdConfig/cdb/clientTimeout in confd.conf, see the confd.conf(5) +manual page. This function can be used to dynamically extend (or shorten) +the timeout for the current action. Thus it is possible to configure a +restrictive timeout in confd.conf, but still allow specific actions to +have a longer execution time. 
Keyword arguments: * sock -- a previously connected CDB socket -* timeout\_secs -- timeout in seconds +* timeout_secs -- timeout in seconds -### set\_values +### set_values ```python set_values(sock, values, path) -> None @@ -688,26 +815,32 @@ Keyword arguments: * values -- a list of TagValue:s * path -- path to container or list entry (string) -### start\_session +### start_session ```python start_session(sock, db) -> None ``` -Starts a new session on an already established socket to CDB. The db parameter should be one of RUNNING, PRE\_COMMIT\_RUNNING, STARTUP and OPERATIONAL. +Starts a new session on an already established socket to CDB. The db +parameter should be one of RUNNING, PRE_COMMIT_RUNNING, STARTUP and +OPERATIONAL. Keyword arguments: * sock -- a previously connected CDB socket * db -- the database -### start\_session2 +### start_session2 ```python start_session2(sock, db, flags) -> None ``` -This function may be used instead of start\_session() if it is considered necessary to have more detailed control over some aspects of the CDB session - if in doubt, use start\_session() instead. The sock and db arguments are the same as for start\_session(), and these values can be used for flags (ORed together if more than one). +This function may be used instead of start_session() if it is considered +necessary to have more detailed control over some aspects of the CDB +session - if in doubt, use start_session() instead. The sock and db +arguments are the same as for start_session(), and these values can be used +for flags (ORed together if more than one). Keyword arguments: @@ -715,46 +848,54 @@ Keyword arguments: * db -- the database * flags -- the flags -### sub\_abort\_trans +### sub_abort_trans ```python sub_abort_trans(sock, code, apptag_ns, apptag_tag, reason) -> None ``` -This function is to be called instead of sync\_subscription\_socket() when the subscriber wishes to abort the current transaction. It is only valid to call after read\_subscription\_socket2() has returned with type set to CDB\_SUB\_PREPARE. The arguments after sock are the same as to X\_seterr\_extended() and give the caller a way of indicating the reason for the failure. +This function is to be called instead of sync_subscription_socket() +when the subscriber wishes to abort the current transaction. It is only +valid to call after read_subscription_socket2() has returned with +type set to CDB_SUB_PREPARE. The arguments after sock are the same as to +X_seterr_extended() and give the caller a way of indicating the +reason for the failure. Keyword arguments: * sock -- a previously connected CDB socket * code -- the error code -* apptag\_ns -- the namespace hash -* apptag\_tag -- the tag hash +* apptag_ns -- the namespace hash +* apptag_tag -- the tag hash * reason -- reason string -### sub\_abort\_trans\_info +### sub_abort_trans_info ```python sub_abort_trans_info(sock, code, apptag_ns, apptag_tag, error_info, reason) -> None ``` -Same a sub\_abort\_trans() but also fills in the NETCONF element. +Same a sub_abort_trans() but also fills in the NETCONF element. 
Keyword arguments: * sock -- a previously connected CDB socket * code -- the error code -* apptag\_ns -- the namespace hash -* apptag\_tag -- the tag hash -* error\_info -- a list of TagValue instances +* apptag_ns -- the namespace hash +* apptag_tag -- the tag hash +* error_info -- a list of TagValue instances * reason -- reason string -### sub\_progress +### sub_progress ```python sub_progress(sock, msg) -> None ``` -After receiving a subscription notification (using read\_subscription\_socket()) but before acknowledging it (or aborting, in the case of prepare subscriptions), it is possible to send progress reports back to ConfD using the sub\_progress() function. +After receiving a subscription notification (using +read_subscription_socket()) but before acknowledging it (or aborting, +in the case of prepare subscriptions), it is possible to send progress +reports back to ConfD using the sub_progress() function. Keyword arguments: @@ -767,7 +908,11 @@ Keyword arguments: subscribe(sock, prio, nspace, path) -> int ``` -Sets up a CDB subscription so that we are notified when CDB configuration data changes. There can be multiple subscription points from different sources, that is a single client daemon can have many subscriptions and there can be many client daemons. The return value is a subscription point used to identify this particular subscription. +Sets up a CDB subscription so that we are notified when CDB configuration +data changes. There can be multiple subscription points from different +sources, that is a single client daemon can have many subscriptions and +there can be many client daemons. The return value is a subscription point +used to identify this particular subscription. Keyword arguments: @@ -782,7 +927,13 @@ Keyword arguments: subscribe2(sock, type, flags, prio, nspace, path) -> int ``` -This function supersedes the current subscribe() and oper\_subscribe() as well as makes it possible to use the new two phase subscription method. Operational and configuration subscriptions can be done on the same socket, but in that case the notifications may be arbitrarily interleaved, including operational notifications arriving between different configuration notifications for the same transaction. If this is a problem, use separate sockets for operational and configuration subscriptions. +This function supersedes the current subscribe() and oper_subscribe() as +well as makes it possible to use the new two phase subscription method. +Operational and configuration subscriptions can be done on the same +socket, but in that case the notifications may be arbitrarily interleaved, +including operational notifications arriving between different configuration +notifications for the same transaction. If this is a problem, use separate +sockets for operational and configuration subscriptions. Keyword arguments: @@ -793,70 +944,90 @@ Keyword arguments: * nspace -- the namespace hash * path -- path to node -### subscribe\_done +### subscribe_done ```python subscribe_done(sock) -> None ``` -When a client is done registering all its subscriptions on a particular subscription socket it must call subscribe\_done(). No notifications will be delivered until then. +When a client is done registering all its subscriptions on a particular +subscription socket it must call subscribe_done(). No notifications will be +delivered until then. 
Keyword arguments: * sock -- a previously connected CDB socket -### sync\_subscription\_socket +### sync_subscription_socket ```python sync_subscription_socket(sock, st) -> None ``` -Once we have read the subscription notification through a call to read\_subscription\_socket() and optionally used the diff\_iterate() to iterate through the changes as well as acted on the changes to CDB, we must synchronize with CDB so that CDB can continue and deliver further subscription messages to subscribers with higher priority numbers. +Once we have read the subscription notification through a call to +read_subscription_socket() and optionally used the diff_iterate() +to iterate through the changes as well as acted on the changes to CDB, we +must synchronize with CDB so that CDB can continue and deliver further +subscription messages to subscribers with higher priority numbers. Keyword arguments: * sock -- a previously connected CDB socket * st -- sync type (int) -### trigger\_oper\_subscriptions +### trigger_oper_subscriptions ```python trigger_oper_subscriptions(sock, sub_points, flags) -> None ``` -This function works like trigger\_subscriptions(), but for CDB subscriptions to operational data. The caller will trigger all subscription points passed in the sub\_points list (or all operational data subscribers if the list is empty), and the call will not return until the last subscriber has called sync\_subscription\_socket(). +This function works like trigger_subscriptions(), but for CDB +subscriptions to operational data. The caller will trigger all +subscription points passed in the sub_points list (or all operational +data subscribers if the list is empty), and the call will not return until +the last subscriber has called sync_subscription_socket(). Keyword arguments: * sock -- a previously connected CDB socket -* sub\_points -- a list of subscription points +* sub_points -- a list of subscription points * flags -- the flags -### trigger\_subscriptions +### trigger_subscriptions ```python trigger_subscriptions(sock, sub_points) -> None ``` -This function makes it possible to trigger CDB subscriptions for configuration data even though the configuration has not been modified. The caller will trigger all subscription points passed in the sub\_points list (or all subscribers if the list is empty) in priority order, and the call will not return until the last subscriber has called sync\_subscription\_socket(). +This function makes it possible to trigger CDB subscriptions for +configuration data even though the configuration has not been modified. +The caller will trigger all subscription points passed in the sub_points +list (or all subscribers if the list is empty) in priority order, and the +call will not return until the last subscriber has called +sync_subscription_socket(). Keyword arguments: * sock -- a previously connected CDB socket -* sub\_points -- a list of subscription points +* sub_points -- a list of subscription points -### wait\_start +### wait_start ```python wait_start(sock) -> None ``` -This call waits until CDB has completed start-phase 1 and is available, when it is CONFD\_OK is returned. If CDB already is available (i.e. start-phase >= 1) the call returns immediately. This can be used by a CDB client who is not synchronously started and only wants to wait until it can read its configuration. The call can be used after connect(). +This call waits until CDB has completed start-phase 1 and is available, +when it is CONFD_OK is returned. If CDB already is available (i.e. 
+start-phase >= 1) the call returns immediately. This can be used by a CDB +client who is not synchronously started and only wants to wait until it +can read its configuration. The call can be used after connect(). Keyword arguments: * sock -- a previously connected CDB socket + ## Predefined Values ```python diff --git a/developer-reference/pyapi/_ncs.dp.md b/developer-reference/pyapi/_ncs.dp.md index 4428cb63..b461a257 100644 --- a/developer-reference/pyapi/_ncs.dp.md +++ b/developer-reference/pyapi/_ncs.dp.md @@ -1,108 +1,128 @@ -# \_ncs.dp Module +# Python _ncs.dp Module Low level callback module for connecting data providers to NCS. -This module is used to connect to the NCS Data Provider API. The purpose of this API is to provide callback hooks so that user-written data providers can provide data stored externally to NCS. NCS needs this information in order to drive its northbound agents. +This module is used to connect to the NCS Data Provider +API. The purpose of this API is to provide callback hooks so that +user-written data providers can provide data stored externally to NCS. +NCS needs this information in order to drive its northbound agents. -The module is also used to populate items in the data model which are not data or configuration items, such as statistics items from the device. +The module is also used to populate items in the data model which are not +data or configuration items, such as statistics items from the device. -The module consists of a number of API functions whose purpose is to install different callback functions at different points in the data model tree which is the representation of the device configuration. Read more about callpoints in tailf\_yang\_extensions(5). Read more about how to use the module in the User Guide chapters on Operational data and External data. +The module consists of a number of API functions whose purpose is to +install different callback functions at different points in the data model +tree which is the representation of the device configuration. Read more +about callpoints in tailf_yang_extensions(5). Read more about how to use +the module in the User Guide chapters on Operational data and External +data. -This documentation should be read together with the [confd\_lib\_dp(3)](../../resources/man/confd_lib_dp.3.md) man page. +This documentation should be read together with the [confd_lib_dp(3)](../../resources/man/confd_lib_dp.3.md) man page. ## Functions -### aaa\_reload +### aaa_reload ```python aaa_reload(tctx) -> None ``` -When the ConfD AAA tree is populated by an external data provider (see the AAA chapter in the User Guide), this function can be used by the data provider to notify ConfD when there is a change to the AAA data. +When the ConfD AAA tree is populated by an external data provider (see the +AAA chapter in the User Guide), this function can be used by the data +provider to notify ConfD when there is a change to the AAA data. Keyword arguments: * tctx -- a transaction context -### access\_reply\_result +### access_reply_result ```python access_reply_result(actx, result) -> None ``` -The callbacks must call this function to report the result of the access check to ConfD, and should normally return CONFD\_OK. If any other value is returned, it will cause the access check to be rejected. +The callbacks must call this function to report the result of the access +check to ConfD, and should normally return CONFD_OK. If any other value is +returned, it will cause the access check to be rejected. 
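As a sketch of how a callback reports its verdict, an authorization callback (see register_authorization_cb() later in this module) might look like the following; the command check is purely illustrative.

```python
import _ncs
import _ncs.dp as dp

class AuthorizationCallbacks(object):
    def cb_chk_cmd_access(self, actx, cmdtokens, cmdop):
        # Reject one illustrative command, accept everything else.
        if cmdtokens and cmdtokens[0] == 'reboot':
            dp.access_reply_result(actx, dp.ACCESS_RESULT_REJECT)
        else:
            dp.access_reply_result(actx, dp.ACCESS_RESULT_ACCEPT)
        return _ncs.CONFD_OK
```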
Keyword arguments: * actx -- the authorization context -* result -- the result (ACCESS\_RESULT\_xxx) +* result -- the result (ACCESS_RESULT_xxx) -### action\_delayed\_reply\_error +### action_delayed_reply_error ```python action_delayed_reply_error(uinfo, errstr) -> None ``` -If we use the CONFD\_DELAYED\_RESPONSE as a return value from the action callback, we must later asynchronously reply. This function is used to reply with error. +If we use the CONFD_DELAYED_RESPONSE as a return value from the action +callback, we must later asynchronously reply. This function is used to +reply with error. Keyword arguments: * uinfo -- a user info context * errstr -- an error string -### action\_delayed\_reply\_ok +### action_delayed_reply_ok ```python action_delayed_reply_ok(uinfo) -> None ``` -If we use the CONFD\_DELAYED\_RESPONSE as a return value from the action callback, we must later asynchronously reply. This function is used to reply with success. +If we use the CONFD_DELAYED_RESPONSE as a return value from the action +callback, we must later asynchronously reply. This function is used to +reply with success. Keyword arguments: * uinfo -- a user info context -### action\_reply\_command +### action_reply_command ```python action_reply_command(uinfo, values) -> None ``` -If a CLI callback command should return data, it must invoke this function in response to the cb\_command() callback. +If a CLI callback command should return data, it must invoke this function +in response to the cb_command() callback. Keyword arguments: * uinfo -- a user info context * values -- a list of strings or None -### action\_reply\_completion +### action_reply_completion ```python action_reply_completion(uinfo, values) -> None ``` -This function must normally be called in response to the cb\_completion() callback. +This function must normally be called in response to the cb_completion() +callback. Keyword arguments: * uinfo -- a user info context * values -- a list of 3-tuples or None (see below) -The values argument must be None or a list of 3-tuples where each tuple is built up like: +The values argument must be None or a list of 3-tuples where each tuple is +built up like: -``` -(type::int, value::string, extra::string) -``` + (type::int, value::string, extra::string) The third item of the tuple (extra) may be set to None. -### action\_reply\_range\_enum +### action_reply_range_enum ```python action_reply_range_enum(uinfo, values, keysize) -> None ``` -This function must be called in response to the cb\_completion() callback when it is invoked via a tailf:cli-custom-range-enumerator statement in the data model. +This function must be called in response to the cb_completion() callback +when it is invoked via a tailf:cli-custom-range-enumerator statement in the +data model. Keyword arguments: @@ -110,15 +130,19 @@ Keyword arguments: * values -- a list of keys as strings or None * keysize -- number of keys for the list in the data model -The values argument is a flat list of keys. If the list in the data model specifies multiple keys this list is still flat. The keysize argument tells us how many keys to use for each list element. So the size of values should be a multiple of keysize. +The values argument is a flat list of keys. If the list in the data model +specifies multiple keys this list is still flat. The keysize argument +tells us how many keys to use for each list element. So the size of values +should be a multiple of keysize. 
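To illustrate, a cb_completion() callback (registered via register_action_cbs(), described later) backing a tailf:cli-custom-range-enumerator might answer with three candidate keys for a single-key list; the key values are placeholders.

```python
import _ncs
import _ncs.dp as dp

class CompletionCallbacks(object):
    def cb_completion(self, uinfo, cli_style, token, completion_char,
                      kp, cmdpath, cmdparam_id, simpleType, extra):
        # The list has one key leaf, so keysize is 1 and the flat list
        # below describes three list entries.
        dp.action_reply_range_enum(uinfo,
                                   ['ethernet-0', 'ethernet-1', 'ethernet-2'],
                                   1)
        return _ncs.CONFD_OK
```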
-### action\_reply\_rewrite +### action_reply_rewrite ```python action_reply_rewrite(uinfo, values, unhides) -> None ``` -This function can be called instead of action\_reply\_command() as a response to a show path rewrite callback invocation. +This function can be called instead of action_reply_command() as a +response to a show path rewrite callback invocation. Keyword arguments: @@ -126,13 +150,14 @@ Keyword arguments: * values -- a list of strings or None * unhides -- a list of strings or None -### action\_reply\_rewrite2 +### action_reply_rewrite2 ```python action_reply_rewrite2(uinfo, values, unhides, selects) -> None ``` -This function can be called instead of action\_reply\_command() as a response to a show path rewrite callback invocation. +This function can be called instead of action_reply_command() as a +response to a show path rewrite callback invocation. Keyword arguments: @@ -141,104 +166,115 @@ Keyword arguments: * unhides -- a list of strings or None * selects -- a list of strings or None -### action\_reply\_values +### action_reply_values ```python action_reply_values(uinfo, values) -> None ``` -If the action definition specifies that the action should return data, it must invoke this function in response to the cb\_action() callback. +If the action definition specifies that the action should return data, it +must invoke this function in response to the cb_action() callback. Keyword arguments: * uinfo -- a user info context -* values -- a list of \_lib.TagValue instances or None +* values -- a list of _lib.TagValue instances or None -### action\_set\_fd +### action_set_fd ```python action_set_fd(uinfo, sock) -> None ``` -Associate a worker socket with the action. This function must be called in the action cb\_init() callback. +Associate a worker socket with the action. This function must be called in +the action cb_init() callback. Keyword arguments: * uinfo -- a user info context * sock -- a previously connected worker socket -A typical implementation of an action cb\_init() callback looks like: +A typical implementation of an action cb_init() callback looks like: -``` -class ActionCallbacks(object): - def __init__(self, workersock): - self.workersock = workersock + class ActionCallbacks(object): + def __init__(self, workersock): + self.workersock = workersock - def cb_init(self, uinfo): - dp.action_set_fd(uinfo, self.workersock) -``` + def cb_init(self, uinfo): + dp.action_set_fd(uinfo, self.workersock) -### action\_set\_timeout +### action_set_timeout ```python action_set_timeout(uinfo, timeout_secs) -> None ``` -Some action callbacks may require a significantly longer execution time than others, and this time may not even be possible to determine statically (e.g. a file download). In such cases the /confdConfig/capi/queryTimeout setting in confd.conf may be insufficient, and this function can be used to extend (or shorten) the timeout for the current callback invocation. The timeout is given in seconds from the point in time when the function is called. +Some action callbacks may require a significantly longer execution time +than others, and this time may not even be possible to determine statically +(e.g. a file download). In such cases the /confdConfig/capi/queryTimeout +setting in confd.conf may be insufficient, and this function can be used to +extend (or shorten) the timeout for the current callback invocation. The +timeout is given in seconds from the point in time when the function is +called. 
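For example, an action that performs a slow file transfer might extend its own deadline before starting the work. This is only a sketch: cb_init()/action_set_fd() and daemon registration are as shown above, and the download helper is illustrative.

```python
import _ncs
import _ncs.dp as dp

def download_large_file():
    pass  # placeholder for the actual long-running work

class ActionCallbacks(object):
    def cb_action(self, uinfo, name, kp, params):
        # Allow this invocation five minutes instead of the configured default.
        dp.action_set_timeout(uinfo, 300)
        download_large_file()
        return _ncs.CONFD_OK
```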
Keyword arguments: * uinfo -- a user info context -* timeout\_secs -- timeout value +* timeout_secs -- timeout value -### action\_seterr +### action_seterr ```python action_seterr(uinfo, errstr) -> None ``` -If action callback encounters fatal problems that can not be expressed via the reply function, it may call this function with an appropriate message and return CONFD\_ERR instead of CONFD\_OK. +If action callback encounters fatal problems that can not be expressed via +the reply function, it may call this function with an appropriate message +and return CONFD_ERR instead of CONFD_OK. Keyword arguments: * uinfo -- a user info context * errstr -- an error message string -### action\_seterr\_extended +### action_seterr_extended ```python action_seterr_extended(uninfo, code, apptag_ns, apptag_tag, errstr) -> None ``` -This function can be used to provide more structured error information from an action callback. +This function can be used to provide more structured error information +from an action callback. Keyword arguments: * uinfo -- a user info context * code -- an error code -* apptag\_ns -- namespace - should be set to 0 -* apptag\_tag -- either 0 or the hash value for a data model node +* apptag_ns -- namespace - should be set to 0 +* apptag_tag -- either 0 or the hash value for a data model node * errstr -- an error message string -### action\_seterr\_extended\_info +### action_seterr_extended_info ```python action_seterr_extended_info(uinfo, code, apptag_ns, apptag_tag, error_info, errstr) -> None ``` -This function can be used to provide structured error information in the same way as action\_seterr\_extended(), and additionally provide contents for the NETCONF element. +This function can be used to provide structured error information in the +same way as action_seterr_extended(), and additionally provide contents for +the NETCONF element. Keyword arguments: * uinfo -- a user info context * code -- an error code -* apptag\_ns -- namespace - should be set to 0 -* apptag\_tag -- either 0 or the hash value for a data model node -* error\_info -- a list of \_lib.TagValue instances +* apptag_ns -- namespace - should be set to 0 +* apptag_tag -- either 0 or the hash value for a data model node +* error_info -- a list of _lib.TagValue instances * errstr -- an error message string -### auth\_seterr +### auth_seterr ```python auth_seterr(actx, errstr) -> None @@ -246,25 +282,33 @@ auth_seterr(actx, errstr) -> None This function is used by the application to set an error string. -This function can be used to provide a text message when the callback returns CONFD\_ERR. If used when rejecting a successful authentication, the message will be logged in ConfD's audit log (otherwise a generic "rejected by application callback" message is logged). +This function can be used to provide a text message when the callback +returns CONFD_ERR. If used when rejecting a successful authentication, the +message will be logged in ConfD's audit log (otherwise a generic "rejected +by application callback" message is logged). Keyword arguments: * actx -- the auth context * errstr -- an error message string -### authorization\_set\_timeout +### authorization_set_timeout ```python authorization_set_timeout(actx, timeout_secs) -> None ``` -The authorization callbacks are invoked on the daemon control socket, and as such are expected to complete quickly. 
However in case they send requests to a remote server, and such a request needs to be retried, this function can be used to extend the timeout for the current callback invocation. The timeout is given in seconds from the point in time when the function is called. +The authorization callbacks are invoked on the daemon control socket, and +as such are expected to complete quickly. However in case they send requests +to a remote server, and such a request needs to be retried, this function +can be used to extend the timeout for the current callback invocation. The +timeout is given in seconds from the point in time when the function is +called. Keyword arguments: * actx -- the authorization context -* timeout\_secs -- timeout value +* timeout_secs -- timeout value ### connect @@ -272,18 +316,19 @@ Keyword arguments: connect(dx, sock, type, ip, port, path) -> None ``` -Connects to the ConfD daemon. The socket instance provided via the 'sock' argument must be kept alive during the lifetime of the daemon context. +Connects to the ConfD daemon. The socket instance provided via the 'sock' +argument must be kept alive during the lifetime of the daemon context. Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() * sock -- a Python socket instance -* type -- the socket type (CONTROL\_SOCKET or WORKER\_SOCKET) -* ip -- the ip address if socket is AF\_INET (optional) -* port -- the port if socket is AF\_INET (optional) -* path -- a filename if socket is AF\_UNIX (optional). +* type -- the socket type (CONTROL_SOCKET or WORKER_SOCKET) +* ip -- the ip address if socket is AF_INET (optional) +* port -- the port if socket is AF_INET (optional) +* path -- a filename if socket is AF_UNIX (optional). -### data\_get\_list\_filter +### data_get_list_filter ```python data_get_list_filter(tctx) -> ListFilter @@ -295,154 +340,170 @@ Keyword arguments: * tctx -- a transaction context -### data\_reply\_attrs +### data_reply_attrs ```python data_reply_attrs(tctx, attrs) -> None ``` -This function is used by the cb\_get\_attrs() callback to return the requested attribute values. +This function is used by the cb_get_attrs() callback to return the +requested attribute values. Keyword arguments: * tctx -- a transaction context -* attrs -- a list of \_lib.AttrValue instances +* attrs -- a list of _lib.AttrValue instances -### data\_reply\_found +### data_reply_found ```python data_reply_found(tctx) -> None ``` -This function is used by the cb\_exists\_optional() callback to indicate to ConfD that a node does exist. +This function is used by the cb_exists_optional() callback to indicate to +ConfD that a node does exist. Keyword arguments: * tctx -- a transaction context -### data\_reply\_next\_key +### data_reply_next_key ```python data_reply_next_key(tctx, keys, next) -> None ``` -This function is used by the cb\_get\_next() and cb\_find\_next() callbacks to return the next key. +This function is used by the cb_get_next() and cb_find_next() callbacks to +return the next key. Keyword arguments: * tctx -- a transaction context -* keys -- a list of keys of \_lib.Value for a list item (se below) -* next -- int value passed to the next invocation of cb\_get\_next() callback +* keys -- a list of keys of _lib.Value for a list item (se below) +* next -- int value passed to the next invocation of cb_get_next() callback -A list may have mutiple key leafs specified in the data model. This is why the keys argument must be a list. 
+A list may have mutiple key leafs specified in the data model. This is why +the keys argument must be a list. -### data\_reply\_next\_object\_array +### data_reply_next_object_array ```python data_reply_next_object_array(tctx, v, next) -> None ``` -This function is used by the optional cb\_get\_next\_object() and cb\_find\_next\_object() callbacks to return an entire object including its keys. It combines the functions of data\_reply\_next\_key() and data\_reply\_value\_array(). +This function is used by the optional cb_get_next_object() and +cb_find_next_object() callbacks to return an entire object including its keys. +It combines the functions of data_reply_next_key() and +data_reply_value_array(). Keyword arguments: * tctx -- a transaction context -* v -- a list of \_lib.Value instances -* next -- int value passed to the next invocation of cb\_get\_next() callback +* v -- a list of _lib.Value instances +* next -- int value passed to the next invocation of cb_get_next() callback -### data\_reply\_next\_object\_arrays +### data_reply_next_object_arrays ```python data_reply_next_object_arrays(tctx, objs, timeout_millisecs) -> None ``` -This function is used by the optional cb\_get\_next\_object() and cb\_find\_next\_object() callbacks to return multiple objects including their keys, in \_lib.Value form. +This function is used by the optional cb_get_next_object() and +cb_find_next_object() callbacks to return multiple objects including their +keys, in _lib.Value form. Keyword arguments: * tctx -- a transaction context * objs -- a list of tuples or None (see below) -* timeout\_millisecs -- timeout value for ConfD's caching of returned data +* timeout_millisecs -- timeout value for ConfD's caching of returned data -The format of argument objs is list(tuple(list(\_lib.Value), long)), or None to indicate end of list. Another way to indicate end of list is to include None as the first item in the 2-tuple last in the list. +The format of argument objs is list(tuple(list(_lib.Value), long)), or +None to indicate end of list. Another way to indicate end of list is to +include None as the first item in the 2-tuple last in the list. E.g.: -``` -V = _lib.Value -objs = [ - ( [ V(1), V(2) ], next1 ), - ( [ V(3), V(4) ], next2 ), - ( None, -1 ) - ] -``` + V = _lib.Value + objs = [ + ( [ V(1), V(2) ], next1 ), + ( [ V(3), V(4) ], next2 ), + ( None, -1 ) + ] -### data\_reply\_next\_object\_tag\_value\_array +### data_reply_next_object_tag_value_array ```python data_reply_next_object_tag_value_array(tctx, tvs, next) -> None ``` -This function is used by the optional cb\_get\_next\_object() and cb\_find\_next\_object() callbacks to return an entire object including its keys +This function is used by the optional cb_get_next_object() and +cb_find_next_object() callbacks to return an entire object including its keys Keyword arguments: * tctx -- a transaction context -* tvs -- a list of \_lib.TagValue instances or None -* next -- int value passed to the next invocation of cb\_get\_next\_object() callback +* tvs -- a list of _lib.TagValue instances or None +* next -- int value passed to the next invocation of cb_get_next_object() + callback -### data\_reply\_next\_object\_tag\_value\_arrays +### data_reply_next_object_tag_value_arrays ```python data_reply_next_object_tag_value_arrays(tctx, objs, timeout_millisecs) -> None ``` -This function is used by the optional cb\_get\_next\_object() and cb\_find\_next\_object() callbacks to return multiple objects including their keys. 
+This function is used by the optional cb_get_next_object() and +cb_find_next_object() callbacks to return multiple objects including their +keys. Keyword arguments: * tctx -- a transaction context * objs -- a list of tuples or None (see below) -* timeout\_millisecs -- timeout value for ConfD's caching of returned data +* timeout_millisecs -- timeout value for ConfD's caching of returned data -The format of argument objs is list(tuple(list(\_lib.TagValue), long)) or None to indicate end of list. Another way to indicate end of list is to include None as the first item in the 2-tuple last in the list. +The format of argument objs is list(tuple(list(_lib.TagValue), long)) or +None to indicate end of list. Another way to indicate end of list is to +include None as the first item in the 2-tuple last in the list. E.g.: -``` -objs = [ - ( [ tagval1, tagval2 ], next1 ), - ( [ tagval3, tagval4, tagval5 ], next2 ), - ( None, -1 ) - ] -``` + objs = [ + ( [ tagval1, tagval2 ], next1 ), + ( [ tagval3, tagval4, tagval5 ], next2 ), + ( None, -1 ) + ] -### data\_reply\_not\_found +### data_reply_not_found ```python data_reply_not_found(tctx) -> None ``` -This function is used by the cb\_get\_elem() and cb\_exists\_optional() callbacks to indicate to ConfD that a list entry or node does not exist. +This function is used by the cb_get_elem() and cb_exists_optional() +callbacks to indicate to ConfD that a list entry or node does not exist. Keyword arguments: * tctx -- a transaction context -### data\_reply\_tag\_value\_array +### data_reply_tag_value_array ```python data_reply_tag_value_array(tctx, tvs) -> None ``` -This function is used to return an array of values, corresponding to a complete list entry, to ConfD. It can be used by the optional cb\_get\_object() callback. +This function is used to return an array of values, corresponding to a +complete list entry, to ConfD. It can be used by the optional +cb_get_object() callback. Keyword arguments: * tctx -- a transaction context -* tvs -- a list of \_lib.TagValue instances or None +* tvs -- a list of _lib.TagValue instances or None -### data\_reply\_value +### data_reply_value ```python data_reply_value(tctx, v) -> None @@ -453,48 +514,60 @@ This function is used to return a single data item to ConfD. Keyword arguments: * tctx -- a transaction context -* v -- a \_lib.Value instance +* v -- a _lib.Value instance -### data\_reply\_value\_array +### data_reply_value_array ```python data_reply_value_array(tctx, vs) -> None ``` -This function is used to return an array of values, corresponding to a complete list entry, to ConfD. It can be used by the optional cb\_get\_object() callback. +This function is used to return an array of values, corresponding to a +complete list entry, to ConfD. It can be used by the optional +cb_get_object() callback. Keyword arguments: * tctx -- a transaction context -* vs -- a list of \_lib.Value instances +* vs -- a list of _lib.Value instances -### data\_set\_timeout +### data_set_timeout ```python data_set_timeout(tctx, timeout_secs) -> None ``` -A data callback should normally complete quickly, since e.g. the execution of a 'show' command in the CLI may require many data callback invocations. In some rare cases it may still be necessary for a data callback to have a longer execution time, and then this function can be used to extend (or shorten) the timeout for the current callback invocation. The timeout is given in seconds from the point in time when the function is called. 
+A data callback should normally complete quickly, since e.g. the +execution of a 'show' command in the CLI may require many data callback +invocations. In some rare cases it may still be necessary for a data +callback to have a longer execution time, and then this function can be +used to extend (or shorten) the timeout for the current callback invocation. +The timeout is given in seconds from the point in time when the function is +called. Keyword arguments: * tctx -- a transaction context -* timeout\_secs -- timeout value +* timeout_secs -- timeout value -### db\_set\_timeout +### db_set_timeout ```python db_set_timeout(dbx, timeout_secs) -> None ``` -Some of the DB callbacks registered via register\_db\_cb(), e.g. cb\_copy\_running\_to\_startup(), may require a longer execution time than others. This function can be used to extend the timeout for the current callback invocation. The timeout is given in seconds from the point in time when the function is called. +Some of the DB callbacks registered via register_db_cb(), e.g. +cb_copy_running_to_startup(), may require a longer execution time than +others. This function can be used to extend the timeout for the current +callback invocation. The timeout is given in seconds from the point in +time when the function is called. Keyword arguments: * dbx -- a db context of DbCtxRef -* timeout\_secs -- timeout value +* timeout_secs -- timeout value -### db\_seterr +### db_seterr ```python db_seterr(dbx, errstr) -> None @@ -507,104 +580,115 @@ Keyword arguments: * dbx -- a db context * errstr -- an error message string -### db\_seterr\_extended +### db_seterr_extended ```python db_seterr_extended(dbx, code, apptag_ns, apptag_tag, errstr) -> None ``` -This function can be used to provide more structured error information from a db callback. +This function can be used to provide more structured error information +from a db callback. Keyword arguments: * dbx -- a db context * code -- an error code -* apptag\_ns -- namespace - should be set to 0 -* apptag\_tag -- either 0 or the hash value for a data model node +* apptag_ns -- namespace - should be set to 0 +* apptag_tag -- either 0 or the hash value for a data model node * errstr -- an error message string -### db\_seterr\_extended\_info +### db_seterr_extended_info ```python db_seterr_extended_info(dbx, code, apptag_ns, apptag_tag, error_info, errstr) -> None ``` -This function can be used to provide structured error information in the same way as db\_seterr\_extended(), and additionally provide contents for the NETCONF element. +This function can be used to provide structured error information in the +same way as db_seterr_extended(), and additionally provide contents for +the NETCONF element. Keyword arguments: * dbx -- a db context * code -- an error code -* apptag\_ns -- namespace - should be set to 0 -* apptag\_tag -- either 0 or the hash value for a data model node -* error\_info -- a list of \_lib.TagValue instances +* apptag_ns -- namespace - should be set to 0 +* apptag_tag -- either 0 or the hash value for a data model node +* error_info -- a list of _lib.TagValue instances * errstr -- an error message string -### delayed\_reply\_error +### delayed_reply_error ```python delayed_reply_error(tctx, errstr) -> None ``` -This function must be used to return an error when tha actual callback returned CONFD\_DELAYED\_RESPONSE. +This function must be used to return an error when tha actual callback +returned CONFD_DELAYED_RESPONSE. 
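As a sketch of the delayed-response pattern, a write callback might hand the work to a background thread and acknowledge later. The backend helper is illustrative, and the worker-socket/threading details are deliberately simplified; delayed_reply_ok() is described next.

```python
import threading
import _ncs
import _ncs.dp as dp

def write_to_external_store(kp, newval):
    pass  # placeholder for the actual (slow) backend write

class DataCallbacks(object):
    def cb_set_elem(self, tctx, kp, newval):
        # Reply asynchronously: apply the write in a background thread and
        # acknowledge on the transaction context once it has completed.
        threading.Thread(target=self._apply_and_reply,
                         args=(tctx, kp, newval)).start()
        return _ncs.CONFD_DELAYED_RESPONSE

    def _apply_and_reply(self, tctx, kp, newval):
        try:
            write_to_external_store(kp, newval)
            dp.delayed_reply_ok(tctx)
        except Exception as err:
            dp.delayed_reply_error(tctx, str(err))
```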
Keyword arguments: * tctx -- a transaction context * errstr -- an error message string -### delayed\_reply\_ok +### delayed_reply_ok ```python delayed_reply_ok(tctx) -> None ``` -This function must be used to return the equivalent of CONFD\_OK when the actual callback returned CONFD\_DELAYED\_RESPONSE. +This function must be used to return the equivalent of CONFD_OK when the +actual callback returned CONFD_DELAYED_RESPONSE. Keyword arguments: * tctx -- a transaction context -### delayed\_reply\_validation\_warn +### delayed_reply_validation_warn ```python delayed_reply_validation_warn(tctx) -> None ``` -This function must be used to return the equivalent of CONFD\_VALIDATION\_WARN when the cb\_validate() callback returned CONFD\_DELAYED\_RESPONSE. +This function must be used to return the equivalent of CONFD_VALIDATION_WARN +when the cb_validate() callback returned CONFD_DELAYED_RESPONSE. Keyword arguments: * tctx -- a transaction context -### error\_seterr +### error_seterr ```python error_seterr(uinfo, errstr) -> None ``` -This function must be called by format\_error() (above) to provide a replacement for the default error message. If format\_error() is called without calling error\_seterr() the default message will be used. +This function must be called by format_error() (above) to provide a + replacement for the default error message. If format_error() is called + without calling error_seterr() the default message will be used. Keyword arguments: * uinfo -- a user info context * errstr -- an string describing the error -### fd\_ready +### fd_ready ```python fd_ready(dx, sock) -> None ``` -The database application owns all data provider sockets to ConfD and is responsible for the polling of these sockets. When one of the ConfD sockets has I/O ready to read, the application must invoke fd\_ready() on the socket. +The database application owns all data provider sockets to ConfD and is +responsible for the polling of these sockets. When one of the ConfD +sockets has I/O ready to read, the application must invoke fd_ready() on +the socket. Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() * sock -- the socket -### init\_daemon +### init_daemon ```python init_daemon(name) -> DaemonCtxRef @@ -616,276 +700,323 @@ Keyword arguments: * name -- a string used to uniquely identify the daemon -### install\_crypto\_keys +### install_crypto_keys ```python install_crypto_keys(dtx) -> None ``` -It is possible to define AES keys inside confd.conf. These keys are used by ConfD to encrypt data which is entered into the system. The supported types are tailf:aes-cfb-128-encrypted-string and tailf:aes-256-cfb-128-encrypted-string. This function will copy those keys from ConfD (which reads confd.conf) into memory in the library. +It is possible to define AES keys inside confd.conf. These keys +are used by ConfD to encrypt data which is entered into the system. +The supported types are tailf:aes-cfb-128-encrypted-string and +tailf:aes-256-cfb-128-encrypted-string. +This function will copy those keys from ConfD (which reads confd.conf) into +memory in the library. -This function must be called before register\_done() is called. +This function must be called before register_done() is called. 
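To show where this call fits, a rough daemon bootstrap sequence might look as follows; the daemon name, addresses, and the omitted callback registrations are placeholders, assuming a local NSO instance on the default IPC port.

```python
import socket
import _ncs.dp as dp

ctrl_sock = socket.socket()
worker_sock = socket.socket()

dx = dp.init_daemon('example-daemon')
dp.connect(dx, ctrl_sock, dp.CONTROL_SOCKET, '127.0.0.1', 4569)    # 4569: default NSO IPC port
dp.connect(dx, worker_sock, dp.WORKER_SOCKET, '127.0.0.1', 4569)

# Copy the AES keys configured in ncs.conf/confd.conf into the library,
# before registration is finalized.
dp.install_crypto_keys(dx)

# ... register_data_cb(), register_action_cbs(), etc. go here ...

dp.register_done(dx)
```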
Keyword arguments: * dtx -- a daemon context wich is connected through a call to connect() -### nano\_service\_reply\_proplist +### nano_service_reply_proplist ```python nano_service_reply_proplist(tctx, proplist) -> None ``` -This function must be called with the new property list, immediately prior to returning from the callback, if the stored property list should be updated. If a callback returns without calling nano\_service\_reply\_proplist(), the previous property list is retained. To completely delete the property list, call this function with the proplist argument set to an empty list or None. +This function must be called with the new property list, immediately prior +to returning from the callback, if the stored property list should be +updated. If a callback returns without calling nano_service_reply_proplist(), +the previous property list is retained. To completely delete the property +list, call this function with the proplist argument set to an empty list or +None. -The proplist argument should be a list of 2-tuples built up like this: list( (name, value), (name, value), ... ) In a 2-tuple both 'name' and 'value' must be strings. +The proplist argument should be a list of 2-tuples built up like this: + list( (name, value), (name, value), ... ) +In a 2-tuple both 'name' and 'value' must be strings. Keyword arguments: * tctx -- a transaction context * proplist -- a list of properties or None -### notification\_flush +### notification_flush ```python notification_flush(nctx) -> None ``` -Notifications are sent asynchronously, i.e. normally without blocking the caller of the send functions described above. This means that in some cases ConfD's sending of the notifications on the northbound interfaces may lag behind the send calls. This function can be used to make sure that the notifications have actually been sent out. +Notifications are sent asynchronously, i.e. normally without blocking the +caller of the send functions described above. This means that in some cases +ConfD's sending of the notifications on the northbound interfaces may lag +behind the send calls. This function can be used to make sure that the +notifications have actually been sent out. Keyword arguments: -* nctx -- notification context returned from register\_notification\_stream() +* nctx -- notification context returned from register_notification_stream() -### notification\_replay\_complete +### notification_replay_complete ```python notification_replay_complete(nctx) -> None ``` -The application calls this function to notify ConfD that the replay is complete +The application calls this function to notify ConfD that the replay is +complete Keyword arguments: -* nctx -- notification context returned from register\_notification\_stream() +* nctx -- notification context returned from register_notification_stream() -### notification\_replay\_failed +### notification_replay_failed ```python notification_replay_failed(nctx) -> None ``` -In case the application fails to complete the replay as requested (e.g. the log gets overwritten while the replay is in progress), the application should call this function instead of notification\_replay\_complete(). An error message describing the reason for the failure can be supplied by first calling notification\_seterr() or notification\_seterr\_extended(). +In case the application fails to complete the replay as requested (e.g. the +log gets overwritten while the replay is in progress), the application +should call this function instead of notification_replay_complete(). 
An +error message describing the reason for the failure can be supplied by +first calling notification_seterr() or notification_seterr_extended(). Keyword arguments: -* nctx -- notification context returned from register\_notification\_stream() +* nctx -- notification context returned from register_notification_stream() -### notification\_reply\_log\_times +### notification_reply_log_times ```python notification_reply_log_times(nctx, creation, aged) -> None ``` -Reply function for use in the cb\_get\_log\_times() callback invocation. If no notifications have been aged out of the log, give None for the aged argument. +Reply function for use in the cb_get_log_times() callback invocation. If no +notifications have been aged out of the log, give None for the aged argument. Keyword arguments: -* nctx -- notification context returned from register\_notification\_stream() -* creation -- a \_lib.DateTime instance -* aged -- a \_lib.DateTime instance or None +* nctx -- notification context returned from register_notification_stream() +* creation -- a _lib.DateTime instance +* aged -- a _lib.DateTime instance or None -### notification\_send +### notification_send ```python notification_send(nctx, time, values) -> None ``` -This function is called by the application to send a notification defined at the top level of a YANG module, whether "live" or replay. +This function is called by the application to send a notification defined +at the top level of a YANG module, whether "live" or replay. Keyword arguments: -* nctx -- notification context returned from register\_notification\_stream() -* time -- a \_lib.DateTime instance -* values -- a list of \_lib.TagValue instances or None +* nctx -- notification context returned from register_notification_stream() +* time -- a _lib.DateTime instance +* values -- a list of _lib.TagValue instances or None -### notification\_send\_path +### notification_send_path ```python notification_send_path(nctx, time, values, path) -> None ``` -This function is called by the application to send a notification defined as a child of a container or list in a YANG 1.1 module, whether "live" or replay. +This function is called by the application to send a notification defined +as a child of a container or list in a YANG 1.1 module, whether "live" or +replay. Keyword arguments: -* nctx -- notification context returned from register\_notification\_stream() -* time -- a \_lib.DateTime instance -* values -- a list of \_lib.TagValue instances or None +* nctx -- notification context returned from register_notification_stream() +* time -- a _lib.DateTime instance +* values -- a list of _lib.TagValue instances or None * path -- path to the parent of the notification in the data tree -### notification\_send\_snmp +### notification_send_snmp ```python notification_send_snmp(nctx, notification, varbinds) -> None ``` -Sends the SNMP notification specified by 'notification', without requesting inform-request delivery information. This is equivalent to calling notification\_send\_snmp\_inform() with None as the cb\_id argument. I.e. if the common arguments are the same, the two functions will send the exact same set of traps and inform-requests. +Sends the SNMP notification specified by 'notification', without requesting +inform-request delivery information. This is equivalent to calling +notification_send_snmp_inform() with None as the cb_id argument. I.e. if +the common arguments are the same, the two functions will send the exact +same set of traps and inform-requests. 
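A minimal sketch of sending such a notification is shown below. The notification name is illustrative, nctx is assumed to come from register_snmp_notification(), and varbinds is left as None just to show the call shape; a real trap would normally carry _lib.SnmpVarbind instances.

```python
import _ncs.dp as dp

def send_link_up(nctx):
    # nctx: notification context from dp.register_snmp_notification()
    dp.notification_send_snmp(nctx, 'linkUp', None)
```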
Keyword arguments: -* nctx -- notification context returned from register\_snmp\_notification() +* nctx -- notification context returned from register_snmp_notification() * notification -- the notification string -* varbinds -- a list of \_lib.SnmpVarbind instances or None +* varbinds -- a list of _lib.SnmpVarbind instances or None -### notification\_send\_snmp\_inform +### notification_send_snmp_inform ```python notification_send_snmp_inform(nctx, notification, varbinds, cb_id, ref) ->None ``` -Sends the SNMP notification specified by notification. If cb\_id is not None the callbacks registered for cb\_id will be invoked with the ref argument. +Sends the SNMP notification specified by notification. If cb_id is not None +the callbacks registered for cb_id will be invoked with the ref argument. Keyword arguments: -* nctx -- notification context returned from register\_snmp\_notification() +* nctx -- notification context returned from register_snmp_notification() * notification -- the notification string -* varbinds -- a list of \_lib.SnmpVarbind instances or None -* cb\_id -- callback id +* varbinds -- a list of _lib.SnmpVarbind instances or None +* cb_id -- callback id * ref -- argument send to callbacks -### notification\_set\_fd +### notification_set_fd ```python notification_set_fd(nctx, sock) -> None ``` -This function may optionally be called by the cb\_replay() callback to request that the worker socket given by 'sock' should be used for the replay. Otherwise the socket specified in register\_notification\_stream() will be used. +This function may optionally be called by the cb_replay() callback to +request that the worker socket given by 'sock' should be used for the +replay. Otherwise the socket specified in register_notification_stream() +will be used. Keyword arguments: -* nctx -- notification context returned from register\_notification\_stream() +* nctx -- notification context returned from register_notification_stream() * sock -- a previously connected worker socket -### notification\_set\_snmp\_notify\_name +### notification_set_snmp_notify_name ```python notification_set_snmp_notify_name(nctx, notify_name) -> None ``` -This function can be used to change the snmpNotifyName (notify\_name) for the nctx context. +This function can be used to change the snmpNotifyName (notify_name) for +the nctx context. Keyword arguments: -* nctx -- notification context returned from register\_snmp\_notification() -* notify\_name -- the snmpNotifyName +* nctx -- notification context returned from register_snmp_notification() +* notify_name -- the snmpNotifyName -### notification\_set\_snmp\_src\_addr +### notification_set_snmp_src_addr ```python notification_set_snmp_src_addr(nctx, family, src_addr) -> None ``` -By default, the source address for the SNMP notifications that are sent by the above functions is chosen by the IP stack of the OS. This function may be used to select a specific source address, given by src\_addr, for the SNMP notifications subsequently sent using the nctx context. The default can be restored by calling the function with family set to AF\_UNSPEC. +By default, the source address for the SNMP notifications that are sent by +the above functions is chosen by the IP stack of the OS. This function may +be used to select a specific source address, given by src_addr, for the +SNMP notifications subsequently sent using the nctx context. The default +can be restored by calling the function with family set to AF_UNSPEC. 
Keyword arguments: -* nctx -- notification context returned from register\_snmp\_notification() -* family -- AF\_INET, AF\_INET6 or AF\_UNSPEC -* src\_addr -- the source address in string format +* nctx -- notification context returned from register_snmp_notification() +* family -- AF_INET, AF_INET6 or AF_UNSPEC +* src_addr -- the source address in string format -### notification\_seterr +### notification_seterr ```python notification_seterr(nctx, errstr) -> None ``` -In some cases the callbacks may be unable to carry out the requested actions, e.g. the capacity for simultaneous replays might be exceeded, and they can then return CONFD\_ERR. This function allows the callback to associate an error message with the failure. It can also be used to supply an error message before calling notification\_replay\_failed(). +In some cases the callbacks may be unable to carry out the requested +actions, e.g. the capacity for simultaneous replays might be exceeded, and +they can then return CONFD_ERR. This function allows the callback to +associate an error message with the failure. It can also be used to supply +an error message before calling notification_replay_failed(). Keyword arguments: -* nctx -- notification context returned from register\_notification\_stream() +* nctx -- notification context returned from register_notification_stream() * errstr -- an error message string -### notification\_seterr\_extended +### notification_seterr_extended ```python notification_seterr_extended(nctx, code, apptag_ns, apptag_tag, errstr) ->None ``` -This function can be used to provide more structured error information from a notification callback. +This function can be used to provide more structured error information +from a notification callback. Keyword arguments: -* nctx -- notification context returned from register\_notification\_stream() +* nctx -- notification context returned from register_notification_stream() * code -- an error code -* apptag\_ns -- namespace - should be set to 0 -* apptag\_tag -- either 0 or the hash value for a data model node +* apptag_ns -- namespace - should be set to 0 +* apptag_tag -- either 0 or the hash value for a data model node * errstr -- an error message string -### notification\_seterr\_extended\_info +### notification_seterr_extended_info ```python notification_seterr_extended_info(nctx, code, apptag_ns, apptag_tag, error_info, errstr) -> None ``` -This function can be used to provide structured error information in the same way as notification\_seterr\_extended(), and additionally provide contents for the NETCONF element. +This function can be used to provide structured error information in the +same way as notification_seterr_extended(), and additionally provide +contents for the NETCONF element. 
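To illustrate the simpler variant, notification_seterr() above, a replay callback that cannot serve the requested interval might report the reason before failing the replay; the availability check is a placeholder.

```python
import _ncs
import _ncs.dp as dp

def replay_log_available():
    return False  # placeholder for a real check against the replay store

class NotificationCallbacks(object):
    def cb_replay(self, nctx, start, stop):
        if not replay_log_available():
            dp.notification_seterr(nctx, 'replay log has been rotated away')
            dp.notification_replay_failed(nctx)
            return _ncs.CONFD_OK
        # ... otherwise deliver the requested notifications and then call
        # dp.notification_replay_complete(nctx) ...
        return _ncs.CONFD_OK
```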
Keyword arguments: -* nctx -- notification context returned from register\_notification\_stream() +* nctx -- notification context returned from register_notification_stream() * code -- an error code -* apptag\_ns -- namespace - should be set to 0 -* apptag\_tag -- either 0 or the hash value for a data model node -* error\_info -- a list of \_lib.TagValue instances +* apptag_ns -- namespace - should be set to 0 +* apptag_tag -- either 0 or the hash value for a data model node +* error_info -- a list of _lib.TagValue instances * errstr -- an error message string -### register\_action\_cbs +### register_action_cbs ```python register_action_cbs(dx, actionpoint, acb) -> None ``` -This function registers up to five callback functions, two of which will be called in sequence when an action is invoked. +This function registers up to five callback functions, two of which will +be called in sequence when an action is invoked. Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() * actionpoint -- the name of the action point * vcb -- the callback instance (see below) -The acb argument should be an instance of a class with callback methods. E.g.: +The acb argument should be an instance of a class with callback methods. +E.g.: -``` -class ActionCallbacks(object): - def cb_init(self, uinfo): - pass + class ActionCallbacks(object): + def cb_init(self, uinfo): + pass - def cb_abort(self, uinfo): - pass + def cb_abort(self, uinfo): + pass - def cb_action(self, uinfo, name, kp, params): - pass + def cb_action(self, uinfo, name, kp, params): + pass - def cb_command(self, uinfo, path, argv): - pass + def cb_command(self, uinfo, path, argv): + pass - def cb_completion(self, uinfo, cli_style, token, completion_char, - kp, cmdpath, cmdparam_id, simpleType, extra): - pass + def cb_completion(self, uinfo, cli_style, token, completion_char, + kp, cmdpath, cmdparam_id, simpleType, extra): + pass -acb = ActionCallbacks() -dp.register_action_cbs(dx, 'actionpoint-1', acb) -``` + acb = ActionCallbacks() + dp.register_action_cbs(dx, 'actionpoint-1', acb) Notes about some of the callbacks: -cb\_action() The params argument is a list of \_lib.TagValue instances. +cb_action() + The params argument is a list of _lib.TagValue instances. -cb\_command() The argv argument is a list of strings. +cb_command() + The argv argument is a list of strings. -### register\_auth\_cb +### register_auth_cb ```python register_auth_cb(dx, acb) -> None @@ -895,21 +1026,19 @@ Registers the authentication callback. 
Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() * abc -- the callback instance (see below) E.g.: -``` -class AuthCallbacks(object): - def cb_auth(self, actx): - pass + class AuthCallbacks(object): + def cb_auth(self, actx): + pass -acb = AuthCallbacks() -dp.register_auth_cb(dx, acb) -``` + acb = AuthCallbacks() + dp.register_auth_cb(dx, acb) -### register\_authorization\_cb +### register_authorization_cb ```python register_authorization_cb(dx, acb, cmd_filter, data_filter) -> None @@ -917,26 +1046,24 @@ register_authorization_cb(dx, acb, cmd_filter, data_filter) -> None Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() * abc -- the callback instance (see below) -* cmd\_filter -- set to 0 for no filtering -* data\_filter -- set to 0 for no filtering +* cmd_filter -- set to 0 for no filtering +* data_filter -- set to 0 for no filtering E.g.: -``` -class AuthorizationCallbacks(object): - def cb_chk_cmd_access(self, actx, cmdtokens, cmdop): - pass + class AuthorizationCallbacks(object): + def cb_chk_cmd_access(self, actx, cmdtokens, cmdop): + pass - def cb_chk_data_access(self, actx, hashed_ns, hkp, dataop, how): - pass + def cb_chk_data_access(self, actx, hashed_ns, hkp, dataop, how): + pass -acb = AuthCallbacks() -dp.register_authorization_cb(dx, acb) -``` + acb = AuthCallbacks() + dp.register_authorization_cb(dx, acb) -### register\_data\_cb +### register_data_cb ```python register_data_cb(dx, callpoint, data, flags) -> None @@ -946,179 +1073,180 @@ Registers data manipulation callback functions. Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() * callpoint -- name of a tailf:callpoint in the data model * data -- the callback instance (see below) -* flags -- data callbacks flags, dp.DATA\_\* (optional) +* flags -- data callbacks flags, dp.DATA_* (optional) -The data argument should be an instance of a class with callback methods. E.g.: +The data argument should be an instance of a class with callback methods. 
+E.g.: -``` -class DataCallbacks(object): - def cb_exists_optional(self, tctx, kp): - pass + class DataCallbacks(object): + def cb_exists_optional(self, tctx, kp): + pass - def cb_get_elem(self, tctx, kp): - pass + def cb_get_elem(self, tctx, kp): + pass - def cb_get_next(self, tctx, kp, next): - pass + def cb_get_next(self, tctx, kp, next): + pass - def cb_set_elem(self, tctx, kp, newval): - pass + def cb_set_elem(self, tctx, kp, newval): + pass - def cb_create(self, tctx, kp): - pass + def cb_create(self, tctx, kp): + pass - def cb_remove(self, tctx, kp): - pass + def cb_remove(self, tctx, kp): + pass - def cb_find_next(self, tctx, kp, type, keys): - pass + def cb_find_next(self, tctx, kp, type, keys): + pass - def cb_num_instances(self, tctx, kp): - pass + def cb_num_instances(self, tctx, kp): + pass - def cb_get_object(self, tctx, kp): - pass + def cb_get_object(self, tctx, kp): + pass - def cb_get_next_object(self, tctx, kp, next): - pass + def cb_get_next_object(self, tctx, kp, next): + pass - def cb_find_next_object(self, tctx, kp, type, keys): - pass + def cb_find_next_object(self, tctx, kp, type, keys): + pass - def cb_get_case(self, tctx, kp, choice): - pass + def cb_get_case(self, tctx, kp, choice): + pass - def cb_set_case(self, tctx, kp, choice, caseval): - pass + def cb_set_case(self, tctx, kp, choice, caseval): + pass - def cb_get_attrs(self, tctx, kp, attrs): - pass + def cb_get_attrs(self, tctx, kp, attrs): + pass - def cb_set_attr(self, tctx, kp, attr, v): - pass + def cb_set_attr(self, tctx, kp, attr, v): + pass - def cb_move_after(self, tctx, kp, prevkeys): - pass + def cb_move_after(self, tctx, kp, prevkeys): + pass - def cb_write_all(self, tctx, kp): - pass + def cb_write_all(self, tctx, kp): + pass -dcb = DataCallbacks() -dp.register_data_cb(dx, 'example-callpoint-1', dcb) -``` + dcb = DataCallbacks() + dp.register_data_cb(dx, 'example-callpoint-1', dcb) -### register\_db\_cb +### register_db_cb ```python register_db_cb(dx, dbcbs) -> None ``` -This function is used to set callback functions which span over several ConfD transactions. +This function is used to set callback functions which span over several +ConfD transactions. Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() * dbcbs -- the callback instance (see below) -The dbcbs argument should be an instance of a class with callback methods. E.g.: +The dbcbs argument should be an instance of a class with callback methods. 
+E.g.: -``` -class DbCallbacks(object): - def cb_candidate_commit(self, dbx, timeout): - pass + class DbCallbacks(object): + def cb_candidate_commit(self, dbx, timeout): + pass - def cb_candidate_confirming_commit(self, dbx): - pass + def cb_candidate_confirming_commit(self, dbx): + pass - def cb_candidate_reset(self, dbx): - pass + def cb_candidate_reset(self, dbx): + pass - def cb_candidate_chk_not_modified(self, dbx): - pass + def cb_candidate_chk_not_modified(self, dbx): + pass - def cb_candidate_rollback_running(self, dbx): - pass + def cb_candidate_rollback_running(self, dbx): + pass - def cb_candidate_validate(self, dbx): - pass + def cb_candidate_validate(self, dbx): + pass - def cb_add_checkpoint_running(self, dbx): - pass + def cb_add_checkpoint_running(self, dbx): + pass - def cb_del_checkpoint_running(self, dbx): - pass + def cb_del_checkpoint_running(self, dbx): + pass - def cb_activate_checkpoint_running(self, dbx): - pass + def cb_activate_checkpoint_running(self, dbx): + pass - def cb_copy_running_to_startup(self, dbx): - pass + def cb_copy_running_to_startup(self, dbx): + pass - def cb_running_chk_not_modified(self, dbx): - pass + def cb_running_chk_not_modified(self, dbx): + pass - def cb_lock(self, dbx, dbname): - pass + def cb_lock(self, dbx, dbname): + pass - def cb_unlock(self, dbx, dbname): - pass + def cb_unlock(self, dbx, dbname): + pass - def cb_lock_partial(self, dbx, dbname, lockid, paths): - pass + def cb_lock_partial(self, dbx, dbname, lockid, paths): + pass - def cb_ulock_partial(self, dbx, dbname, lockid): - pass + def cb_ulock_partial(self, dbx, dbname, lockid): + pass - def cb_delete_confid(self, dbx, dbname): - pass + def cb_delete_confid(self, dbx, dbname): + pass -dbcbs = DbCallbacks() -dp.register_db_cb(dx, dbcbs) -``` + dbcbs = DbCallbacks() + dp.register_db_cb(dx, dbcbs) -### register\_done +### register_done ```python register_done(dx) -> None ``` -When we have registered all the callbacks for a daemon (including the other types described below if we have them), we must call this function to synchronize with ConfD. No callbacks will be invoked until it has been called, and after the call, no further registrations are allowed. +When we have registered all the callbacks for a daemon (including the other +types described below if we have them), we must call this function to +synchronize with ConfD. No callbacks will be invoked until it has been +called, and after the call, no further registrations are allowed. Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() -### register\_error\_cb +### register_error_cb ```python register_error_cb(dx, errortypes, ecbs) -> None ``` -This funciton can be used to register error callbacks that are invoked for internally generated errors. +This funciton can be used to register error callbacks that are +invoked for internally generated errors. 
Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() * errortypes -- logical OR of the error types that the ecbs should handle * ecbs -- the callback instance (see below) E.g.: -``` -class ErrorCallbacks(object): - def cb_format_error(self, uinfo, errinfo_dict, default_msg): - dp.error_seterr(uinfo, default_msg) -ecbs = ErrorCallbacks() -dp.register_error_cb(ctx, - dp.ERRTYPE_BAD_VALUE | - dp.ERRTYPE_MISC, ecbs) -dp.register_done(ctx) -``` + class ErrorCallbacks(object): + def cb_format_error(self, uinfo, errinfo_dict, default_msg): + dp.error_seterr(uinfo, default_msg) + ecbs = ErrorCallbacks() + dp.register_error_cb(ctx, + dp.ERRTYPE_BAD_VALUE | + dp.ERRTYPE_MISC, ecbs) + dp.register_done(ctx) -### register\_nano\_service\_cb +### register_nano_service_cb ```python register_nano_service_cb(dx,servicepoint,componenttype,state,nscb) -> None @@ -1128,7 +1256,7 @@ This function registers the nano service callbacks. Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() * servicepoint -- name of the service point (string) * componenttype -- name of the plan component for the nano service (string) * state -- name of component state for the nano service (string) @@ -1136,159 +1264,161 @@ Keyword arguments: E.g: -``` -class NanoServiceCallbacks(object): - def cb_nano_create(self, tctx, root, service, plan, - component, state, proplist, compproplist): - pass + class NanoServiceCallbacks(object): + def cb_nano_create(self, tctx, root, service, plan, + component, state, proplist, compproplist): + pass - def cb_nano_delete(self, tctx, root, service, plan, - component, state, proplist, compproplist): - pass + def cb_nano_delete(self, tctx, root, service, plan, + component, state, proplist, compproplist): + pass -nscb = NanoServiceCallbacks() -dp.register_nano_service_cb(dx, 'service-point-1', 'comp', 'state', nscb) -``` + nscb = NanoServiceCallbacks() + dp.register_nano_service_cb(dx, 'service-point-1', 'comp', 'state', nscb) -### register\_notification\_snmp\_inform\_cb +### register_notification_snmp_inform_cb ```python register_notification_snmp_inform_cb(dx, cb_id, cbs) -> None ``` -If we want to receive information about the delivery of SNMP inform-requests, we must register two callbacks for this. +If we want to receive information about the delivery of SNMP +inform-requests, we must register two callbacks for this. 
Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() -* cb\_id -- the callback identifier +* dx -- a daemon context acquired through a call to init_daemon() +* cb_id -- the callback identifier * cbs -- the callback instance (see below) E.g.: -``` -class NotifySnmpCallbacks(object): - def cb_targets(self, nctx, ref, targets): - pass + class NotifySnmpCallbacks(object): + def cb_targets(self, nctx, ref, targets): + pass - def cb_result(self, nctx, ref, target, got_response): - pass + def cb_result(self, nctx, ref, target, got_response): + pass -cbs = NotifySnmpCallbacks() -dp.register_notification_snmp_inform_cb(dx, 'callback-id-1', cbs) -``` + cbs = NotifySnmpCallbacks() + dp.register_notification_snmp_inform_cb(dx, 'callback-id-1', cbs) -### register\_notification\_stream +### register_notification_stream ```python register_notification_stream(dx, ncbs, sock, streamname) -> NotificationCtxRef ``` -This function registers the notification stream and optionally two callback functions used for the replay functionality. +This function registers the notification stream and optionally two callback +functions used for the replay functionality. -The returned notification context must be used by the application for the sending of live notifications via notification\_send() or notification\_send\_path(). +The returned notification context must be used by the application for the +sending of live notifications via notification_send() or +notification_send_path(). Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() * ncbs -- the callback instance (see below) * sock -- a previously connected worker socket * streamname -- the name of the notification stream E.g.: -``` -class NotificationCallbacks(object): - def cb_get_log_times(self, nctx): - pass + class NotificationCallbacks(object): + def cb_get_log_times(self, nctx): + pass - def cb_replay(self, nctx, start, stop): - pass + def cb_replay(self, nctx, start, stop): + pass -ncbs = NotificationCallbacks() -livectx = dp.register_notification_stream(dx, ncbs, workersock, -'streamname') -``` + ncbs = NotificationCallbacks() + livectx = dp.register_notification_stream(dx, ncbs, workersock, + 'streamname') -### register\_notification\_sub\_snmp\_cb +### register_notification_sub_snmp_cb ```python register_notification_sub_snmp_cb(dx, sub_id, cbs) -> None ``` -Registers a callback function to be called when an SNMP notification is received by the SNMP gateway. +Registers a callback function to be called when an SNMP notification is +received by the SNMP gateway. 
Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() -* sub\_id -- the subscription id for the notifications +* dx -- a daemon context acquired through a call to init_daemon() +* sub_id -- the subscription id for the notifications * cbs -- the callback instance (see below) E.g.: -``` -class NotifySubSnmpCallbacks(object): - def cb_recv(self, nctx, notification, varbinds, src_addr, port): - pass + class NotifySubSnmpCallbacks(object): + def cb_recv(self, nctx, notification, varbinds, src_addr, port): + pass -cbs = NotifySubSnmpCallbacks() -dp.register_notification_sub_snmp_cb(dx, 'sub-id-1', cbs) -``` + cbs = NotifySubSnmpCallbacks() + dp.register_notification_sub_snmp_cb(dx, 'sub-id-1', cbs) -### register\_range\_action\_cbs +### register_range_action_cbs ```python register_range_action_cbs(dx, actionpoint, acb, lower, upper, path) -> None ``` -A variant of register\_action\_cbs() which registers action callbacks for a range of key values. The lower, upper, and path arguments are the same as for register\_range\_data\_cb(). +A variant of register_action_cbs() which registers action callbacks for a +range of key values. The lower, upper, and path arguments are the same as +for register_range_data_cb(). Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() * actionpoint -- the name of the action point -* data -- the callback instance (see register\_action\_cbs()) +* data -- the callback instance (see register_action_cbs()) * lower -- a list of Value's or None * upper -- a list of Value's or None * path -- path for the list (string) -### register\_range\_data\_cb +### register_range_data_cb ```python register_range_data_cb(dx, callpoint, data, lower, upper, path, flags) -> None ``` -This is a variant of register\_data\_cb() which registers a set of callbacks for a range of list entries. +This is a variant of register_data_cb() which registers a set of callbacks +for a range of list entries. Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() * callpoint -- name of a tailf:callpoint in the data model -* data -- the callback instance (see register\_data\_cb()) +* data -- the callback instance (see register_data_cb()) * lower -- a list of Value's or None * upper -- a list of Value's or None * path -- path for the list (string) -* flags -- data callbacks flags, dp.DATA\_\* (optional) +* flags -- data callbacks flags, dp.DATA_* (optional) -### register\_range\_valpoint\_cb +### register_range_valpoint_cb ```python register_range_valpoint_cb(dx, valpoint, vcb, lower, upper, path) -> None ``` -A variant of register\_valpoint\_cb() which registers a validation function for a range of key values. The lower, upper and path arguments are the same as for register\_range\_data\_cb(). +A variant of register_valpoint_cb() which registers a validation function +for a range of key values. The lower, upper and path arguments are the same +as for register_range_data_cb(). 
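+
+A minimal illustrative sketch; the valpoint name, list path, and integer
+key bounds below are assumptions, not part of this API description:
+
+    class RangeValpointCallback(object):
+        def cb_validate(self, tctx, kp, newval):
+            pass
+
+    # only validate list entries whose integer key falls within 1..100
+    lower = [_lib.Value(1, _lib.C_INT32)]
+    upper = [_lib.Value(100, _lib.C_INT32)]
+    vcb = RangeValpointCallback()
+    dp.register_range_valpoint_cb(dx, 'valpoint-1', vcb, lower, upper,
+                                  '/config/items/item')
+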
Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() * valpoint -- name of a validation point -* data -- the callback instance (see register\_valpoint\_cb()) +* data -- the callback instance (see register_valpoint_cb()) * lower -- a list of Value's or None * upper -- a list of Value's or None * path -- path for the list (string) -### register\_service\_cb +### register_service_cb ```python register_service_cb(dx, servicepoint, scb) -> None @@ -1298,43 +1428,44 @@ This function registers the service callbacks. Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() * servicepoint -- name of the service point (string) * scb -- the callback instance (see below) E.g: -``` -class ServiceCallbacks(object): - def cb_create(self, tctx, kp, proplist, fastmap_thandle): - pass + class ServiceCallbacks(object): + def cb_create(self, tctx, kp, proplist, fastmap_thandle): + pass - def cb_pre_modification(self, tctx, op, kp, proplist): - pass + def cb_pre_modification(self, tctx, op, kp, proplist): + pass - def cb_post_modification(self, tctx, op, kp, proplist): - pass + def cb_post_modification(self, tctx, op, kp, proplist): + pass -scb = ServiceCallbacks() -dp.register_service_cb(dx, 'service-point-1', scb) -``` + scb = ServiceCallbacks() + dp.register_service_cb(dx, 'service-point-1', scb) -### register\_snmp\_notification +### register_snmp_notification ```python register_snmp_notification(dx, sock, notify_name, ctx_name) -> NotificationCtxRef ``` -SNMP notifications can also be sent via the notification framework, however most aspects of the stream concept do not apply for SNMP. This function is used to register a worker socket, the snmpNotifyName (notify\_name), and SNMP context (ctx\_name) to be used for the notifications. +SNMP notifications can also be sent via the notification framework, however +most aspects of the stream concept do not apply for SNMP. This function is +used to register a worker socket, the snmpNotifyName (notify_name), and +SNMP context (ctx_name) to be used for the notifications. Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() * sock -- a previously connected worker socket -* notify\_name -- the snmpNotifyName -* ctx\_name -- the SNMP context +* notify_name -- the snmpNotifyName +* ctx_name -- the SNMP context -### register\_trans\_cb +### register_trans_cb ```python register_trans_cb(dx, trans) -> None @@ -1344,188 +1475,198 @@ Registers transaction callback functions. Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() * trans -- the callback instance (see below) -The trans argument should be an instance of a class with callback methods. E.g.: +The trans argument should be an instance of a class with callback methods. 
+E.g.: -``` -class TransCallbacks(object): - def cb_init(self, tctx): - pass + class TransCallbacks(object): + def cb_init(self, tctx): + pass - def cb_trans_lock(self, tctx): - pass + def cb_trans_lock(self, tctx): + pass - def cb_trans_unlock(self, tctx): - pass + def cb_trans_unlock(self, tctx): + pass - def cb_write_start(self, tctx): - pass + def cb_write_start(self, tctx): + pass - def cb_prepare(self, tctx): - pass + def cb_prepare(self, tctx): + pass - def cb_abort(self, tctx): - pass + def cb_abort(self, tctx): + pass - def cb_commit(self, tctx): - pass + def cb_commit(self, tctx): + pass - def cb_finish(self, tctx): - pass + def cb_finish(self, tctx): + pass - def cb_interrupt(self, tctx): - pass + def cb_interrupt(self, tctx): + pass -tcb = TransCallbacks() -dp.register_trans_cb(dx, tcb) -``` + tcb = TransCallbacks() + dp.register_trans_cb(dx, tcb) -### register\_trans\_validate\_cb +### register_trans_validate_cb ```python register_trans_validate_cb(dx, vcbs) -> None ``` -This function installs two callback functions for the daemon context. One function that gets called when the validation phase starts in a transaction and one when the validation phase stops in a transaction. +This function installs two callback functions for the daemon context. One +function that gets called when the validation phase starts in a transaction +and one when the validation phase stops in a transaction. Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() * vcbs -- the callback instance (see below) -The vcbs argument should be an instance of a class with callback methods. E.g.: +The vcbs argument should be an instance of a class with callback methods. +E.g.: -``` -class TransValidateCallbacks(object): - def cb_init(self, tctx): - pass + class TransValidateCallbacks(object): + def cb_init(self, tctx): + pass - def cb_stop(self, tctx): - pass + def cb_stop(self, tctx): + pass -vcbs = TransValidateCallbacks() -dp.register_trans_validate_cb(dx, vcbs) -``` + vcbs = TransValidateCallbacks() + dp.register_trans_validate_cb(dx, vcbs) -### register\_usess\_cb +### register_usess_cb ```python register_usess_cb(dx, ucb) -> None ``` -This function can be used to register information callbacks that are invoked for user session start and stop. +This function can be used to register information callbacks that are +invoked for user session start and stop. Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() * ucb -- the callback instance (see below) E.g.: -``` -class UserSessionCallbacks(object): - def cb_start(self, dx, uinfo): - pass + class UserSessionCallbacks(object): + def cb_start(self, dx, uinfo): + pass - def cb_stop(self, dx, uinfo): - pass + def cb_stop(self, dx, uinfo): + pass -ucb = UserSessionCallbacks() -dp.register_usess_cb(dx, acb) -``` + ucb = UserSessionCallbacks() + dp.register_usess_cb(dx, acb) -### register\_valpoint\_cb +### register_valpoint_cb ```python register_valpoint_cb(dx, valpoint, vcb) -> None ``` -We must also install an actual validation function for each validation point, i.e. for each tailf:validate statement in the YANG data model. +We must also install an actual validation function for each validation +point, i.e. for each tailf:validate statement in the YANG data model. 
Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() * valpoint -- the name of the validation point * vcb -- the callback instance (see below) -The vcb argument should be an instance of a class with a callback method. E.g.: +The vcb argument should be an instance of a class with a callback method. +E.g.: -``` -class ValpointCallback(object): - def cb_validate(self, tctx, kp, newval): - pass + class ValpointCallback(object): + def cb_validate(self, tctx, kp, newval): + pass -vcb = ValpointCallback() -dp.register_valpoint_cb(dx, 'valpoint-1', vcb) -``` + vcb = ValpointCallback() + dp.register_valpoint_cb(dx, 'valpoint-1', vcb) -### release\_daemon +### release_daemon ```python release_daemon(dx) -> None ``` -Releases all memory that has been allocated by init\_daemon() and other functions for the daemon context. The control socket as well as all the worker sockets must be closed by the application (before or after release\_daemon() has been called). +Releases all memory that has been allocated by init_daemon() and other +functions for the daemon context. The control socket as well as all the +worker sockets must be closed by the application (before or after +release_daemon() has been called). Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() -### service\_reply\_proplist +### service_reply_proplist ```python service_reply_proplist(tctx, proplist) -> None ``` -This function must be called with the new property list, immediately prior to returning from the callback, if the stored property list should be updated. If a callback returns without calling service\_reply\_proplist(), the previous property list is retained. To completely delete the property list, call this function with the proplist argument set to an empty list or None. +This function must be called with the new property list, immediately prior +to returning from the callback, if the stored property list should be +updated. If a callback returns without calling service_reply_proplist(), +the previous property list is retained. To completely delete the property +list, call this function with the proplist argument set to an empty list or +None. -The proplist argument should be a list of 2-tuples built up like this: list( (name, value), (name, value), ... ) In a 2-tuple both 'name' and 'value' must be strings. +The proplist argument should be a list of 2-tuples built up like this: + list( (name, value), (name, value), ... ) +In a 2-tuple both 'name' and 'value' must be strings. Keyword arguments: * tctx -- a transaction context * proplist -- a list of properties or None -### set\_daemon\_flags +### set_daemon_flags ```python set_daemon_flags(dx, flags) -> None ``` -Modifies the API behaviour according to the flags ORed into the flags argument. +Modifies the API behaviour according to the flags ORed into the flags +argument. Keyword arguments: -* dx -- a daemon context acquired through a call to init\_daemon() +* dx -- a daemon context acquired through a call to init_daemon() * flags -- the flags to set -### trans\_set\_fd +### trans_set_fd ```python trans_set_fd(tctx, sock) -> None ``` -Associate a worker socket with the transaction, or validation phase. This function must be called in the transaction and validation cb\_init() callbacks. +Associate a worker socket with the transaction, or validation phase. 
This +function must be called in the transaction and validation cb_init() +callbacks. Keyword arguments: * tctx -- a transaction context * sock -- a previously connected worker socket -A minimal implementation of a transaction cb\_init() callback looks like: +A minimal implementation of a transaction cb_init() callback looks like: -``` -class TransCb(object): - def __init__(self, workersock): - self.workersock = workersock + class TransCb(object): + def __init__(self, workersock): + self.workersock = workersock - def cb_init(self, tctx): - dp.trans_set_fd(tctx, self.workersock) -``` + def cb_init(self, tctx): + dp.trans_set_fd(tctx, self.workersock) -### trans\_seterr +### trans_seterr ```python trans_seterr(tctx, errstr) -> None @@ -1538,45 +1679,49 @@ Keyword arguments: * tctx -- a transaction context * errstr -- an error message string -### trans\_seterr\_extended +### trans_seterr_extended ```python trans_seterr_extended(tctx, code, apptag_ns, apptag_tag, errstr) -> None ``` -This function can be used to provide more structured error information from a transaction or data callback. +This function can be used to provide more structured error information +from a transaction or data callback. Keyword arguments: * tctx -- a transaction context * code -- an error code -* apptag\_ns -- namespace - should be set to 0 -* apptag\_tag -- either 0 or the hash value for a data model node +* apptag_ns -- namespace - should be set to 0 +* apptag_tag -- either 0 or the hash value for a data model node * errstr -- an error message string -### trans\_seterr\_extended\_info +### trans_seterr_extended_info ```python trans_seterr_extended_info(tctx, code, apptag_ns, apptag_tag, error_info, errstr) -> None ``` -This function can be used to provide structured error information in the same way as trans\_seterr\_extended(), and additionally provide contents for the NETCONF element. +This function can be used to provide structured error information in the +same way as trans_seterr_extended(), and additionally provide contents for +the NETCONF element. Keyword arguments: * tctx -- a transaction context * code -- an error code -* apptag\_ns -- namespace - should be set to 0 -* apptag\_tag -- either 0 or the hash value for a data model node -* error\_info -- a list of \_lib.TagValue instances +* apptag_ns -- namespace - should be set to 0 +* apptag_tag -- either 0 or the hash value for a data model node +* error_info -- a list of _lib.TagValue instances * errstr -- an error message string + ## Classes ### _class_ **AuthCtxRef** -This type represents the c-type struct confd\_auth\_ctx. +This type represents the c-type struct confd_auth_ctx. Available attributes: @@ -1595,7 +1740,7 @@ _None_ ### _class_ **AuthorizationCtxRef** -This type represents the c-type struct confd\_authorization\_ctx. +This type represents the c-type struct confd_authorization_ctx. Available attributes: @@ -1610,7 +1755,7 @@ _None_ ### _class_ **DaemonCtxRef** -struct confd\_daemon\_ctx references object +struct confd_daemon_ctx references object Members: @@ -1618,7 +1763,7 @@ _None_ ### _class_ **DbCtxRef** -This type represents the c-type struct confd\_db\_ctx. +This type represents the c-type struct confd_db_ctx. DbCtxRef cannot be directly instantiated from Python. @@ -1634,6 +1779,7 @@ Method: did() -> int ``` +
@@ -1646,6 +1792,7 @@ Method: dx() -> DaemonCtxRef ``` +
@@ -1658,6 +1805,7 @@ Method: lastop() -> int ``` +
@@ -1670,6 +1818,7 @@ Method: qref() -> int ``` +
@@ -1682,18 +1831,19 @@ Method: uinfo() -> _ncs.UserInfo ``` +
### _class_ **ListFilter** -This type represents the c-type struct confd\_list\_filter. +This type represents the c-type struct confd_list_filter. Available attributes: -* type -- filter type, LF\_\* +* type -- filter type, LF_* * expr1 -- OR, AND, NOT expression * expr2 -- OR, AND expression -* op -- operation, CMP\_\* and EXEC\_\* +* op -- operation, CMP_* and EXEC_* * node -- filter tagpath * val -- filter value @@ -1705,12 +1855,12 @@ _None_ ### _class_ **NotificationCtxRef** -This type represents the c-type struct confd\_notification\_ctx. +This type represents the c-type struct confd_notification_ctx. Available attributes: * name -- stream name or snmp notify name (string or None) -* ctx\_name -- for snmp only (string or None) +* ctx_name -- for snmp only (string or None) * fd -- worker socket (int) * dx -- the daemon context (DaemonCtxRef) @@ -1722,17 +1872,19 @@ _None_ ### _class_ **TrItemRef** -This type represents the c-type confd\_tr\_item. +This type represents the c-type confd_tr_item. Available attributes: * callpoint -- the callpoint (string) -* op -- operation, one of C\_SET\_ELEM, C\_CREATE, C\_REMOVE, C\_SET\_CASE, C\_SET\_ATTR or C\_MOVE\_AFTER (int) +* op -- operation, one of C_SET_ELEM, C_CREATE, C_REMOVE, C_SET_CASE, + C_SET_ATTR or C_MOVE_AFTER (int) * hkp -- the keypath (HKeypathRef) * val -- the value (Value or None) -* choice -- the choice, only for C\_SET\_CASE (Value or None) -* attr -- attribute, only for C\_SET\_ATTR (int or None) -* next -- the next TrItemRef object in the linked list or None if no more items are found +* choice -- the choice, only for C_SET_CASE (Value or None) +* attr -- attribute, only for C_SET_ATTR (int or None) +* next -- the next TrItemRef object in the linked list or None if no more + items are found TrItemRef cannot be directly instantiated from Python. @@ -1914,6 +2066,7 @@ MISC_APPLICATION_INTERNAL = 20 MISC_BAD_PERSIST_ID = 16 MISC_CANDIDATE_ABORT_BAD_USID = 17 MISC_CDB_OPER_UNAVAILABLE = 37 +MISC_CONF_LOAD_NOT_ALLOWED = 59 MISC_DATA_MISSING = 44 MISC_EXTERNAL = 22 MISC_EXTERNAL_TIMEOUT = 45 @@ -2094,7 +2247,6 @@ NCS_UNKNOWN_NED_IDS_COMPLIANCE_TEMPLATE = 124 NCS_UNKNOWN_NED_ID_DEVICE_TEMPLATE = 106 NCS_XML_PARSE = 11 NCS_YANGLIB_NO_SCHEMA_FOR_RUNNING = 114 -OPERATION_CASE_EXISTS = 13 PATCH_FLAG_AAA_CHECKED = 8 PATCH_FLAG_BUFFER_DAMPENED = 2 PATCH_FLAG_FILTER = 4 diff --git a/developer-reference/pyapi/_ncs.events.md b/developer-reference/pyapi/_ncs.events.md index 2fc74f74..3ea1b937 100644 --- a/developer-reference/pyapi/_ncs.events.md +++ b/developer-reference/pyapi/_ncs.events.md @@ -1,27 +1,37 @@ -# \_ncs.events Module +# Python _ncs.events Module Low level module for subscribing to NCS event notifications. -This module is used to connect to NCS and subscribe to certain events generated by NCS. The API to receive events from NCS is a socket based API whereby the application connects to NCS and receives events on a socket. See also the Notifications chapter in the User Guide. The program misc/notifications/confd\_notifications.c in the examples collection illustrates subscription and processing for all these events, and can also be used standalone in a development environment to monitor NCS events. +This module is used to connect to NCS and subscribe to certain +events generated by NCS. The API to receive events from NCS is a +socket based API whereby the application connects to NCS and receives +events on a socket. See also the Notifications chapter in the User Guide. 
+The program misc/notifications/confd_notifications.c in the examples +collection illustrates subscription and processing for all these events, +and can also be used standalone in a development environment to monitor +NCS events. -This documentation should be read together with the [confd\_lib\_events(3)](../../resources/man/confd_lib_events.3.md) man page. +This documentation should be read together with the [confd_lib_events(3)](../../resources/man/confd_lib_events.3.md) man page. ## Functions -### diff\_notification\_done +### diff_notification_done ```python diff_notification_done(sock, tctx) -> None ``` -If the received event was NOTIF\_COMMIT\_DIFF it is important that we call this function when we are done reading the transaction diffs over MAAPI. The transaction is hanging until this function gets called. This function also releases memory associated to the transaction in the library. +If the received event was NOTIF_COMMIT_DIFF it is important that we call +this function when we are done reading the transaction diffs over MAAPI. +The transaction is hanging until this function gets called. This function +also releases memory associated to the transaction in the library. Keyword arguments: * sock -- a previously connected notification socket * tctx -- a transaction context -### notifications\_connect +### notifications_connect ```python notifications_connect(sock, mask, ip, port, path) -> None @@ -33,249 +43,271 @@ Keyword arguments: * sock -- a Python socket instance * mask -- a bitmask of one or several notification type values -* ip -- the ip address if socket is AF\_INET (optional) -* port -- the port if socket is AF\_INET (optional) -* path -- a filename if socket is AF\_UNIX (optional). +* ip -- the ip address if socket is AF_INET (optional) +* port -- the port if socket is AF_INET (optional) +* path -- a filename if socket is AF_UNIX (optional). -### notifications\_connect2 +### notifications_connect2 ```python notifications_connect2(sock, mask, data, ip, port, path) -> None ``` -This variant of notifications\_connect is required if we wish to subscribe to NOTIF\_HEARTBEAT, NOTIF\_HEALTH\_CHECK, or NOTIF\_STREAM\_EVENT events. +This variant of notifications_connect is required if we wish to subscribe +to NOTIF_HEARTBEAT, NOTIF_HEALTH_CHECK, or NOTIF_STREAM_EVENT events. Keyword arguments: * sock -- a Python socket instance * mask -- a bitmask of one or several notification type values -* data -- a \_events.NotificationsData instance -* ip -- the ip address if socket is AF\_INET (optional) -* port -- the port if socket is AF\_INET (optional) -* path -- a filename if socket is AF\_UNIX (optional) +* data -- a _events.NotificationsData instance +* ip -- the ip address if socket is AF_INET (optional) +* port -- the port if socket is AF_INET (optional) +* path -- a filename if socket is AF_UNIX (optional) -### read\_notification +### read_notification ```python read_notification(sock) -> dict ``` -The application is responsible for polling the notification socket. Once data is available to be read on the socket the application must call read\_notification() to read the data from the socket. On success a dictionary containing notification information will be returned (see below). +The application is responsible for polling the notification socket. Once +data is available to be read on the socket the application must call +read_notification() to read the data from the socket. On success a +dictionary containing notification information will be returned (see below). 
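+
+A minimal polling sketch; the event mask, loopback address, and IPC port
+below are assumptions used only for illustration:
+
+    import select
+    import socket
+
+    from _ncs import events
+
+    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+    # subscribe to audit events on the assumed default NCS IPC address/port
+    events.notifications_connect(sock, events.NOTIF_AUDIT,
+                                 '127.0.0.1', 4569)
+
+    while True:
+        # block until the notification socket becomes readable
+        readable, _, _ = select.select([sock], [], [])
+        if sock in readable:
+            notif = events.read_notification(sock)
+            print(notif['type'], notif.get('msg'))
+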
Keyword arguments: * sock -- a previously connected notification socket -On success the returned dict will contain information corresponding to the c struct confd\_notification. The notification type is accessible through the 'type' key. The remaining information will be different depending on which type of notification this is (described below). +On success the returned dict will contain information corresponding to the +c struct confd_notification. The notification type is accessible through +the 'type' key. The remaining information will be different depending on +which type of notification this is (described below). -Keys for type NOTIF\_AUDIT (struct confd\_audit\_notification): +Keys for type NOTIF_AUDIT (struct confd_audit_notification): -* logno -* user -* msg -* usid +* logno +* user +* msg +* usid -Keys for type NOTIF\_DAEMON, NOTIF\_NETCONF, NOTIF\_DEVEL, NOTIF\_JSONRPC, NOTIF\_WEBUI, or NOTIF\_TAKEOVER\_SYSLOG (struct confd\_syslog\_notification): +Keys for type NOTIF_DAEMON, NOTIF_NETCONF, NOTIF_DEVEL, NOTIF_JSONRPC, +NOTIF_WEBUI, or NOTIF_TAKEOVER_SYSLOG (struct confd_syslog_notification): -* prio -* logno -* msg +* prio +* logno +* msg -Keys for type NOTIF\_COMMIT\_SIMPLE (struct confd\_commit\_notification): +Keys for type NOTIF_COMMIT_SIMPLE (struct confd_commit_notification): -* database -* diff\_available -* flags -* uinfo +* database +* diff_available +* flags +* uinfo -Keys for type NOTIF\_COMMIT\_DIFF (struct confd\_commit\_diff\_notification): - -* database -* flags -* uinfo -* tctx -* label (optional) -* comment (optional) - -Keys for type NOTIF\_USER\_SESSION (struct confd\_user\_sess\_notification): - -* type -* uinfo -* database - -Keys for type NOTIF\_HA\_INFO (struct confd\_ha\_notification): - -* type (1) -* noprimary - if (1) is HA\_INFO\_NOPRIMARY -* secondary\_died - if (1) is HA\_INFO\_SECONDARY\_DIED (see below) -* secondary\_arrived - if (1) is HA\_INFO\_SECONDARY\_ARRIVED (see below) -* cdb\_initialized\_by\_copy - if (1) is HA\_INFO\_SECONDARY\_INITIALIZED -* besecondary\_result - if (1) is HA\_INFO\_BESECONDARY\_RESULT - -If secondary\_died or secondary\_arrived is present they will in turn contain a dictionary with the following keys: - -* nodeid -* af (1) -* ip4 - if (1) is AF\_INET -* ip6 - if (1) is AF\_INET6 -* str - if (1) if AF\_UNSPEC - -Keys for type NOTIF\_SUBAGENT\_INFO (struct confd\_subagent\_notification): - -* type -* name - -Keys for type NOTIF\_COMMIT\_FAILED (struct confd\_commit\_failed\_notification): - -* provider (1) -* dbname -* port - if (1) is DP\_NETCONF -* af (2) - if (1) is DP\_NETCONF -* ip4 - if (2) is AF\_INET -* ip6 - if (2) is AF\_INET6 -* daemon\_name - if (1) is DP\_EXTERNAL - -Keys for type NOTIF\_SNMPA (struct confd\_snmpa\_notification): - -* pdu\_type (1) -* request\_id -* error\_status -* error\_index -* port -* af (2) -* ip4 - if (3) is AF\_INET -* ip6 - if (3) is AF\_INET6 -* vb (optional) -* generic\_trap - if (1) is SNMPA\_PDU\_V1TRAP -* specific\_trap - if (1) is SNMPA\_PDU\_V1TRAP -* time\_stamp - if (1) is SNMPA\_PDU\_V1TRAP -* enterprise - if (1) is SNMPA\_PDU\_V1TRAP (optional) - -Keys for type NOTIF\_FORWARD\_INFO (struct confd\_forward\_notification): - -* type -* target -* uinfo - -Keys for type NOTIF\_CONFIRMED\_COMMIT (struct confd\_confirmed\_commit\_notification): - -* type -* timeout -* uinfo - -Keys for type NOTIF\_UPGRADE\_EVENT (struct confd\_upgrade\_notification): - -* event - -Keys for type NOTIF\_COMPACTION (struct confd\_compaction\_notification): - -* dbfile (1) - name of the compacted 
file -* type - automatic or manual -* fsize\_start - size at start (bytes) -* fsize\_end - size at end (bytes) -* fsize\_last - size at end of last compaction (bytes) -* time\_start - start time (microseconds) -* duration - duration (microseconds) -* ntrans - number of transactions written to (1) since last compaction - -Keys for type NOTIF\_COMMIT\_PROGRESS and NOTIF\_PROGRESS (struct confd\_progress\_notification): - -* type (1) -* timestamp -* duration if (1) is CONFD\_PROGRESS\_STOP -* trace\_id (optional) -* span\_id -* parent\_span\_id (optional) -* usid -* tid -* datastore -* context (optional) -* subsystem (optional) -* msg (optional) -* annotation (optional) -* num\_attributes -* attributes (optional) -* num\_links -* links (optional) - -Keys for type NOTIF\_STREAM\_EVENT (struct confd\_stream\_notification): - -* type (1) -* error - if (1) is STREAM\_REPLAY\_FAILED -* event\_time - if (1) is STREAM\_NOTIFICATION\_EVENT -* values - if (1) is STREAM\_NOTIFICATION\_EVENT - -Keys for type NOTIF\_CQ\_PROGRESS (struct ncs\_cq\_progress\_notification): - -* type -* timestamp -* cq\_id -* cq\_tag -* label -* completed\_devices (optional) -* transient\_devices (optional) -* failed\_devices (optional) -* failed\_reasons - if failed\_devices is present -* completed\_services (optional) -* completed\_services\_completed\_devices - if completed\_services is present -* failed\_services (optional) -* failed\_services\_completed\_devices - if failed\_services is present -* failed\_services\_failed\_devices - if failed\_services is present - -Keys for type NOTIF\_CALL\_HOME\_INFO (struct ncs\_call\_home\_notification): - -* type (1) -* device - if (1) is CALL\_HOME\_DEVICE\_CONNECTED or CALL\_HOME\_DEVICE\_DISCONNECTED -* af (2) -* ip4 - if (2) is AF\_INET -* ip6 - if (2) is AF\_INET6 -* port -* ssh\_host\_key -* ssh\_key\_alg - -### sync\_audit\_network\_notification +Keys for type NOTIF_COMMIT_DIFF (struct confd_commit_diff_notification): + +* database +* flags +* uinfo +* tctx +* label (optional) +* comment (optional) + +Keys for type NOTIF_USER_SESSION (struct confd_user_sess_notification): + +* type +* uinfo +* database + +Keys for type NOTIF_HA_INFO (struct confd_ha_notification): + +* type (1) +* noprimary - if (1) is HA_INFO_NOPRIMARY +* secondary_died - if (1) is HA_INFO_SECONDARY_DIED (see below) +* secondary_arrived - if (1) is HA_INFO_SECONDARY_ARRIVED (see below) +* cdb_initialized_by_copy - if (1) is HA_INFO_SECONDARY_INITIALIZED +* besecondary_result - if (1) is HA_INFO_BESECONDARY_RESULT + +If secondary_died or secondary_arrived is present they will in turn contain +a dictionary with the following keys: + +* nodeid +* af (1) +* ip4 - if (1) is AF_INET +* ip6 - if (1) is AF_INET6 +* str - if (1) if AF_UNSPEC + +Keys for type NOTIF_SUBAGENT_INFO (struct confd_subagent_notification): + +* type +* name + +Keys for type NOTIF_COMMIT_FAILED (struct confd_commit_failed_notification): + +* provider (1) +* dbname +* port - if (1) is DP_NETCONF +* af (2) - if (1) is DP_NETCONF +* ip4 - if (2) is AF_INET +* ip6 - if (2) is AF_INET6 +* daemon_name - if (1) is DP_EXTERNAL + +Keys for type NOTIF_SNMPA (struct confd_snmpa_notification): + +* pdu_type (1) +* request_id +* error_status +* error_index +* port +* af (2) +* ip4 - if (3) is AF_INET +* ip6 - if (3) is AF_INET6 +* vb (optional) +* generic_trap - if (1) is SNMPA_PDU_V1TRAP +* specific_trap - if (1) is SNMPA_PDU_V1TRAP +* time_stamp - if (1) is SNMPA_PDU_V1TRAP +* enterprise - if (1) is SNMPA_PDU_V1TRAP (optional) + +Keys for type 
NOTIF_FORWARD_INFO (struct confd_forward_notification): + +* type +* target +* uinfo + +Keys for type NOTIF_CONFIRMED_COMMIT + (struct confd_confirmed_commit_notification): + +* type +* timeout +* uinfo + +Keys for type NOTIF_UPGRADE_EVENT (struct confd_upgrade_notification): + +* event + +Keys for type NOTIF_COMPACTION (struct confd_compaction_notification): + +* dbfile (1) - name of the compacted file +* type - automatic or manual +* fsize_start - size at start (bytes) +* fsize_end - size at end (bytes) +* fsize_last - size at end of last compaction (bytes) +* time_start - start time (microseconds) +* duration - duration (microseconds) +* ntrans - number of transactions written to (1) since last compaction + +Keys for type NOTIF_COMMIT_PROGRESS and NOTIF_PROGRESS + (struct confd_progress_notification): + +* type (1) +* timestamp +* duration if (1) is CONFD_PROGRESS_STOP +* trace_id (optional) +* span_id +* parent_span_id (optional) +* usid +* tid +* datastore +* context (optional) +* subsystem (optional) +* msg (optional) +* annotation (optional) +* num_attributes +* attributes (optional) +* num_links +* links (optional) + +Keys for type NOTIF_STREAM_EVENT (struct confd_stream_notification): + +* type (1) +* error - if (1) is STREAM_REPLAY_FAILED +* event_time - if (1) is STREAM_NOTIFICATION_EVENT +* values - if (1) is STREAM_NOTIFICATION_EVENT + +Keys for type NOTIF_CQ_PROGRESS (struct ncs_cq_progress_notification): + +* type +* timestamp +* cq_id +* cq_tag +* label +* completed_devices (optional) +* transient_devices (optional) +* failed_devices (optional) +* failed_reasons - if failed_devices is present +* completed_services (optional) +* completed_services_completed_devices - if completed_services is present +* failed_services (optional) +* failed_services_completed_devices - if failed_services is present +* failed_services_failed_devices - if failed_services is present + +Keys for type NOTIF_CALL_HOME_INFO (struct ncs_call_home_notification): + +* type (1) +* device - if (1) is CALL_HOME_DEVICE_CONNECTED or + CALL_HOME_DEVICE_DISCONNECTED +* af (2) +* ip4 - if (2) is AF_INET +* ip6 - if (2) is AF_INET6 +* port +* ssh_host_key +* ssh_key_alg + +### sync_audit_network_notification ```python sync_audit_network_notification(sock, usid) -> None ``` -If the received event was NOTIF\_AUDIT\_NETWORK, and we are subscribing to notifications with the flag NOTIF\_AUDIT\_NETWORK\_SYNC, this function must be called when we are done processing the notification. The user session is hanging until this function gets called. +If the received event was NOTIF_AUDIT_NETWORK, and we are subscribing to +notifications with the flag NOTIF_AUDIT_NETWORK_SYNC, this function must be +called when we are done processing the notification. The user session is +hanging until this function gets called. Keyword arguments: * sock -- a previously connected notification socket * usid -- the user session id -### sync\_audit\_notification +### sync_audit_notification ```python sync_audit_notification(sock, usid) -> None ``` -If the received event was NOTIF\_AUDIT, and we are subscribing to notifications with the flag NOTIF\_AUDIT\_SYNC, this function must be called when we are done processing the notification. The user session is hanging until this function gets called. +If the received event was NOTIF_AUDIT, and we are subscribing to +notifications with the flag NOTIF_AUDIT_SYNC, this function must be called +when we are done processing the notification. The user session is hanging +until this function gets called. 
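+
+A small sketch of the acknowledgement; sock is a connected notification
+socket and notif is a dict previously returned by read_notification() for
+a NOTIF_AUDIT event (both are assumptions used only for illustration):
+
+    # the user session hangs until the audit usid is acknowledged
+    events.sync_audit_notification(sock, notif['usid'])
+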
Keyword arguments: * sock -- a previously connected notification socket * usid -- the user session id -### sync\_ha\_notification +### sync_ha_notification ```python sync_ha_notification(sock) -> None ``` -If the received event was NOTIF\_HA\_INFO, and we are subscribing to notifications with the flag NOTIF\_HA\_INFO\_SYNC, this function must be called when we are done processing the notification. All HA processing is blocked until this function gets called. +If the received event was NOTIF_HA_INFO, and we are subscribing to +notifications with the flag NOTIF_HA_INFO_SYNC, this function must be +called when we are done processing the notification. All HA processing is +blocked until this function gets called. Keyword arguments: * sock -- a previously connected notification socket + ## Classes ### _class_ **Notification** -This is a placeholder for the c-type struct confd\_notification. +This is a placeholder for the c-type struct confd_notification. Notification cannot be directly instantiated from Python. @@ -285,20 +317,22 @@ _None_ ### _class_ **NotificationsData** -This type represents the c-type struct confd\_notifications\_data. +This type represents the c-type struct confd_notifications_data. The contructor for this type has the following signature: -NotificationsData(hearbeat\_interval, health\_check\_interval, stream\_name, start\_time, stop\_time, xpath\_filter, usid, verbosity) -> object +NotificationsData(hearbeat_interval, health_check_interval, stream_name, + start_time, stop_time, xpath_filter, usid, + verbosity) -> object Keyword arguments: -* heartbeat\_interval -- time in milli seconds (int) -* health\_check\_interval -- time in milli seconds (int) -* stream\_name -- name of the notification stream (string) -* start\_time -- the start time (Value) -* stop\_time -- the stop time (Value) -* xpath\_filter -- XPath filter for the stream (string) - optional +* heartbeat_interval -- time in milli seconds (int) +* health_check_interval -- time in milli seconds (int) +* stream_name -- name of the notification stream (string) +* start_time -- the start time (Value) +* stop_time -- the stop time (Value) +* xpath_filter -- XPath filter for the stream (string) - optional * usid -- user session id for AAA restriction (int) - optional * verbosity -- progress verbosity level (int) - optional diff --git a/developer-reference/pyapi/_ncs.ha.md b/developer-reference/pyapi/_ncs.ha.md index aede552b..4d4d60c4 100644 --- a/developer-reference/pyapi/_ncs.ha.md +++ b/developer-reference/pyapi/_ncs.ha.md @@ -1,10 +1,14 @@ -# \_ncs.ha Module +# Python _ncs.ha Module Low level module for connecting to NCS HA subsystem. -This module is used to connect to the NCS High Availability (HA) subsystem. NCS can replicate the configuration data on several nodes in a cluster. The purpose of this API is to manage the HA functionality. The details on usage of the HA API are described in the chapter High availability in the User Guide. +This module is used to connect to the NCS High Availability (HA) +subsystem. NCS can replicate the configuration data on several nodes +in a cluster. The purpose of this API is to manage the HA +functionality. The details on usage of the HA API are described in the +chapter High availability in the User Guide. -This documentation should be read together with the [confd\_lib\_ha(3)](../../resources/man/confd_lib_ha.3.md) man page. +This documentation should be read together with the [confd_lib_ha(3)](../../resources/man/confd_lib_ha.3.md) man page. 
## Functions @@ -14,7 +18,8 @@ This documentation should be read together with the [confd\_lib\_ha(3)](../../re bemaster(sock, mynodeid) -> None ``` -This function is deprecated and will be removed. Use beprimary() instead. +This function is deprecated and will be removed. +Use beprimary() instead. ### benone @@ -22,7 +27,8 @@ This function is deprecated and will be removed. Use beprimary() instead. benone(sock) -> None ``` -Instruct a node to resume the initial state, i.e. neither become primary nor secondary. +Instruct a node to resume the initial state, i.e. neither become primary +nor secondary. Keyword arguments: @@ -47,7 +53,8 @@ Keyword arguments: berelay(sock) -> None ``` -Instruct an established HA secondary node to be a relay for other secondary nodes. +Instruct an established HA secondary node to be a relay for other +secondary nodes. Keyword arguments: @@ -59,15 +66,22 @@ Keyword arguments: besecondary(sock, mynodeid, primary_id, primary_ip, waitreply) -> None ``` -Instruct a NCS HA node to be a secondary node with a named primary node. If waitreply is True the function is synchronous and it will hang until the node has initialized its CDB database. This may mean that the CDB database is copied in its entirety from the primary node. If False, we do not wait for the reply, but it is possible to use a notifications socket and get notified asynchronously via a HA\_INFO\_BESECONDARY\_RESULT notification. In both cases, it is also possible to use a notifications socket and get notified asynchronously when CDB at the secondary node is initialized. +Instruct a NCS HA node to be a secondary node with a named primary node. +If waitreply is True the function is synchronous and it will hang until the +node has initialized its CDB database. This may mean that the CDB database +is copied in its entirety from the primary node. If False, we do not wait +for the reply, but it is possible to use a notifications socket and get +notified asynchronously via a HA_INFO_BESECONDARY_RESULT notification. +In both cases, it is also possible to use a notifications socket and get +notified asynchronously when CDB at the secondary node is initialized. Keyword arguments: -* sock -- a previously connected HA socket -* mynodeid -- name of this secondary node (Value or string) -* primary\_id -- name of the primary node (Value or string) -* primary\_ip -- ip address of the primary node -* waitreply -- synchronous or not (bool) +* sock -- a previously connected HA socket +* mynodeid -- name of this secondary node (Value or string) +* primary_id -- name of the primary node (Value or string) +* primary_ip -- ip address of the primary node +* waitreply -- synchronous or not (bool) ### beslave @@ -75,7 +89,8 @@ Keyword arguments: beslave(sock, mynodeid, primary_id, primary_ip, waitreply) -> None ``` -This function is deprecated and will be removed. Use besecondary() instead. +This function is deprecated and will be removed. +Use besecondary() instead. ### connect @@ -83,36 +98,42 @@ This function is deprecated and will be removed. Use besecondary() instead. connect(sock, token, ip, port, pstr) -> None ``` -Connect a HA socket which can be used to control a NCS HA node. The token is a secret string that must be shared by all participants in the cluster. There can only be one HA socket towards NCS. A new call to ha\_connect() makes NCS close the previous connection and reset the token to the new value. +Connect a HA socket which can be used to control a NCS HA node. 
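+
+A minimal sketch; the shared token, node names, and addresses below are
+assumptions used only for illustration:
+
+    import socket
+
+    from _ncs import ha
+
+    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+    ha.connect(sock, 'shared-secret', '127.0.0.1', 4569)
+
+    # join as a secondary of 'node-1' and wait until CDB on this node
+    # has been initialized (waitreply=True makes the call synchronous)
+    ha.besecondary(sock, 'node-2', 'node-1', '10.0.0.1', True)
+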
The token +is a secret string that must be shared by all participants in the cluster. +There can only be one HA socket towards NCS. A new call to +ha_connect() makes NCS close the previous connection and reset the token to +the new value. Keyword arguments: * sock -- a Python socket instance * token -- secret string -* ip -- the ip address if socket is AF\_INET or AF\_INET6 (optional) -* port -- the port if socket is AF\_INET or AF\_INET6 (optional) -* pstr -- a filename if socket is AF\_UNIX (optional). +* ip -- the ip address if socket is AF_INET or AF_INET6 (optional) +* port -- the port if socket is AF_INET or AF_INET6 (optional) +* pstr -- a filename if socket is AF_UNIX (optional). -### secondary\_dead +### secondary_dead ```python secondary_dead(sock, nodeid) -> None ``` -This function must be used by the application to inform NCS HA subsystem that another node which is possibly connected to NCS is dead. +This function must be used by the application to inform NCS HA subsystem +that another node which is possibly connected to NCS is dead. Keyword arguments: * sock -- a previously connected HA socket * nodeid -- name of the node (Value or string) -### slave\_dead +### slave_dead ```python slave_dead(sock, nodeid) -> None ``` -This function is deprecated and will be removed. Use secondary\_dead() instead. +This function is deprecated and will be removed. +Use secondary_dead() instead. ### status @@ -122,12 +143,15 @@ status(sock) -> None Query a ConfD HA node for its status. -Returns a 2-tuple of the HA status of the node in the format (State,\[list\_of\_nodes]) where 'list\_of\_nodes' is the primary/secondary(s) connected with node. +Returns a 2-tuple of the HA status of the node in the format +(State,[list_of_nodes]) where 'list_of_nodes' is the primary/secondary(s) +connected with node. Keyword arguments: * sock -- a previously connected HA socket + ## Predefined Values ```python diff --git a/developer-reference/pyapi/_ncs.maapi.md b/developer-reference/pyapi/_ncs.maapi.md index 96264589..96b321e3 100644 --- a/developer-reference/pyapi/_ncs.maapi.md +++ b/developer-reference/pyapi/_ncs.maapi.md @@ -1,14 +1,20 @@ -# \_ncs.maapi Module +# Python _ncs.maapi Module -Low level module for connecting to NCS with a read/write interface inside transactions. +Low level module for connecting to NCS with a read/write interface +inside transactions. -This module is used to connect to the NCS transaction manager. The API described here has several purposes. We can use MAAPI when we wish to implement our own proprietary management agent. We also use MAAPI to attach to already existing NCS transactions, for example when we wish to implement semantic validation of configuration data in Python, and also when we wish to implement CLI wizards in Python. +This module is used to connect to the NCS transaction manager. +The API described here has several purposes. We can use MAAPI when we wish +to implement our own proprietary management agent. +We also use MAAPI to attach to already existing NCS transactions, for +example when we wish to implement semantic validation of configuration +data in Python, and also when we wish to implement CLI wizards in Python. -This documentation should be read together with the [confd\_lib\_maapi(3)](../../resources/man/confd_lib_maapi.3.md) man page. +This documentation should be read together with the [confd_lib_maapi(3)](../../resources/man/confd_lib_maapi.3.md) man page. 
## Functions -### aaa\_reload +### aaa_reload ```python aaa_reload(sock, synchronous) -> None @@ -16,14 +22,18 @@ aaa_reload(sock, synchronous) -> None Start a reload of aaa from external data provider. -Used by external data provider to notify that there is a change to the AAA data. Calling the function with the argument 'synchronous' set to 1 or True means that the call will block until the loading is completed. +Used by external data provider to notify that there is a change to the AAA +data. Calling the function with the argument 'synchronous' set to 1 or True +means that the call will block until the loading is completed. Keyword arguments: * sock -- a python socket instance -* synchronous -- if 1, will wait for the loading complete and return when the loading is complete; if 0, will only initiate the loading of AAA data and return immediately +* synchronous -- if 1, will wait for the loading complete and return when + the loading is complete; if 0, will only initiate the loading of AAA + data and return immediately -### aaa\_reload\_path +### aaa_reload_path ```python aaa_reload_path(sock, synchronous, path) -> None @@ -31,15 +41,18 @@ aaa_reload_path(sock, synchronous, path) -> None Start a reload of aaa from external data provider. -A variant of \_maapi\_aaa\_reload() that causes only the AAA subtree given by path to be loaded. +A variant of _maapi_aaa_reload() that causes only the AAA subtree given by +path to be loaded. Keyword arguments: * sock -- a python socket instance -* synchronous -- if 1, will wait for the loading complete and return when the loading is complete; if 0, will only initiate the loading of AAA data and return immediately +* synchronous -- if 1, will wait for the loading complete and return when + the loading is complete; if 0, will only initiate the loading of AAA + data and return immediately * path -- the subtree to be loaded -### abort\_trans +### abort_trans ```python abort_trans(sock, thandle) -> None @@ -52,7 +65,7 @@ Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle -### abort\_upgrade +### abort_upgrade ```python abort_upgrade(sock) -> None @@ -66,13 +79,15 @@ Keyword arguments: * sock -- a python socket instance -### apply\_template +### apply_template ```python apply_template(sock, thandle, template, variables, flags, rootpath) -> None ``` -Apply a template that has been loaded into NCS. The template parameter gives the name of the template. This is NOT a FASTMAP function, for that use shared\_ncs\_apply\_template instead. +Apply a template that has been loaded into NCS. The template parameter gives +the name of the template. This is NOT a FASTMAP function, for that use +shared_ncs_apply_template instead. Keyword arguments: @@ -83,7 +98,7 @@ Keyword arguments: * flags -- should be 0 * rootpath -- in what context to apply the template -### apply\_trans +### apply_trans ```python apply_trans(sock, thandle, keepopen) -> None @@ -91,7 +106,10 @@ apply_trans(sock, thandle, keepopen) -> None Apply a transaction. -Validates, prepares and eventually commits or aborts the transaction. If the validation fails and the 'keep\_open' argument is set to 1 or True, the transaction is left open and the developer can react upon the validation errors. +Validates, prepares and eventually commits or aborts the transaction. If +the validation fails and the 'keep_open' argument is set to 1 or True, the +transaction is left open and the developer can react upon the validation +errors. 
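+
+A minimal sketch; th is assumed to be the handle of an already started
+read-write transaction on this socket, and maapi below refers to this
+module:
+
+    # validate, prepare and commit; keep the transaction open on a
+    # validation failure so the errors can be inspected
+    maapi.apply_trans(sock, th, True)
+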
Keyword arguments: @@ -99,13 +117,13 @@ Keyword arguments: * thandle -- transaction handle * keepopen -- if true, transaction is not discarded if validation fails -### apply\_trans\_flags +### apply_trans_flags ```python apply_trans_flags(sock, thandle, keepopen, flags) -> None ``` -A variant of apply\_trans() that takes an additional 'flags' argument. +A variant of apply_trans() that takes an additional 'flags' argument. Keyword arguments: @@ -114,13 +132,13 @@ Keyword arguments: * keepopen -- if true, transaction is not discarded if validation fails * flags -- flags to set in the transaction -### apply\_trans\_params +### apply_trans_params ```python apply_trans_params(sock, thandle, keepopen, params) -> list ``` -A variant of apply\_trans() that takes commit parameters in form of a list ofTagValue objects and returns a list of TagValue objects depending on theparameters passed in. +A variant of apply_trans() that takes commit parameters in form of a list ofTagValue objects and returns a list of TagValue objects depending on theparameters passed in. Keyword arguments: @@ -140,7 +158,7 @@ Attach to a existing transaction. Keyword arguments: * sock -- a python socket instance -* hashed\_ns -- the namespace to use +* hashed_ns -- the namespace to use * ctx -- transaction context ### attach2 @@ -149,22 +167,24 @@ Keyword arguments: attach2(sock, hashed_ns, usid, thandle) -> None ``` -Used when there is no transaction context beforehand, to attach to a existing transaction. +Used when there is no transaction context beforehand, to attach to a +existing transaction. Keyword arguments: * sock -- a python socket instance -* hashed\_ns -- the namespace to use +* hashed_ns -- the namespace to use * usid -- user session id, can be set to 0 to use the owner of the transaction * thandle -- transaction handle -### attach\_init +### attach_init ```python attach_init(sock) -> int ``` -Attach the \_MAAPI socket to the special transaction available during phase0. Returns the thandle as an integer. +Attach the _MAAPI socket to the special transaction available during phase0. +Returns the thandle as an integer. Keyword arguments: @@ -176,7 +196,13 @@ Keyword arguments: authenticate(sock, user, password, n) -> tuple ``` -Authenticate a user session. Use the 'n' to get a list of n-1 groups that the user is a member of. Use n=1 if the function is used in a context where the group names are not needed. Returns 1 if accepted without groups. If the authentication failed or was accepted a tuple with first element status code, 0 for rejection and 1 for accepted is returned. The second element either contains the reason for the rejection as a string OR a list groupnames. +Authenticate a user session. Use the 'n' to get a list of n-1 groups that +the user is a member of. Use n=1 if the function is used in a context +where the group names are not needed. Returns 1 if accepted without groups. +If the authentication failed or was accepted a tuple with first element +status code, 0 for rejection and 1 for accepted is returned. The second +element either contains the reason for the rejection as a string OR a list +groupnames. Keyword arguments: @@ -191,18 +217,23 @@ Keyword arguments: authenticate2(sock, user, password, src_addr, src_port, context, prot, n) -> tuple ``` -This function does the same thing as maapi.authenticate(), but allows for passing of the additional parameters src\_addr, src\_port, context, and prot, which otherwise are passed only to maapi\_start\_user\_session()/ maapi\_start\_user\_session2(). 
The parameters are passed on to an external authentication executable. Keyword arguments: +This function does the same thing as maapi.authenticate(), but allows for +passing of the additional parameters src_addr, src_port, context, and prot, +which otherwise are passed only to maapi_start_user_session()/ +maapi_start_user_session2(). The parameters are passed on to an external +authentication executable. +Keyword arguments: * sock -- a python socket instance * user -- username * pass -- password -* src\_addr -- ip address -* src\_port -- port number +* src_addr -- ip address +* src_port -- port number * context -- context for the session * prot -- the protocol used by the client for connecting * n -- number of groups to return -### candidate\_abort\_commit +### candidate_abort_commit ```python candidate_abort_commit(sock) -> None @@ -214,20 +245,20 @@ Keyword arguments: * sock -- a python socket instance -### candidate\_abort\_commit\_persistent +### candidate_abort_commit_persistent ```python candidate_abort_commit_persistent(sock, persist_id) -> None ``` -Cancel an ongoing confirmed commit with the cookie given by persist\_id. +Cancel an ongoing confirmed commit with the cookie given by persist_id. Keyword arguments: * sock -- a python socket instance -* persist\_id -- gives the cookie for an already ongoing persistent confirmed commit +* persist_id -- gives the cookie for an already ongoing persistent confirmed commit -### candidate\_commit +### candidate_commit ```python candidate_commit(sock) -> None @@ -239,73 +270,83 @@ Keyword arguments: * sock -- a python socket instance -### candidate\_commit\_info +### candidate_commit_info ```python candidate_commit_info(sock, persist_id, label, comment) -> None ``` -Commit the candidate to running, or confirm an ongoing confirmed commit, and set the Label and/or Comment that is stored in the rollback file when the candidate is committed to running. +Commit the candidate to running, or confirm an ongoing confirmed commit, +and set the Label and/or Comment that is stored in the rollback file when +the candidate is committed to running. Note: - -> To ensure the Label and/or Comment are stored in the rollback file in all cases when doing a confirmed commit, they must be given with both, the confirmed commit (using maapi\_candidate\_confirmed\_commit\_info()) and the confirming commit (using this function). +> To ensure the Label and/or Comment are stored in the rollback file in +> all cases when doing a confirmed commit, they must be given with both, +> the confirmed commit (using maapi_candidate_confirmed_commit_info()) +> and the confirming commit (using this function). Keyword arguments: * sock -- a python socket instance -* persist\_id -- gives the cookie for an already ongoing persistent confirmed commit +* persist_id -- gives the cookie for an already ongoing persistent confirmed commit * label -- the Label * comment -- the Comment -### candidate\_commit\_persistent +### candidate_commit_persistent ```python candidate_commit_persistent(sock, persist_id) -> None ``` -Confirm an ongoing persistent commit with the cookie given by persist\_id. +Confirm an ongoing persistent commit with the cookie given by persist_id. 
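A hedged sketch of the persistent confirmed-commit workflow built from `candidate_confirmed_commit_persistent()` (documented a little further down), `candidate_commit_persistent()` and `candidate_abort_commit_persistent()`; the cookie string is arbitrary, and passing None for `persist_id` when starting a new confirmed commit is an assumption.

```python
from _ncs import maapi

COOKIE = 'maint-2024-06-01'  # example cookie, any unique string

# Start a persistent confirmed commit with a 10 minute timeout; it survives
# the loss of this MAAPI session because it is identified by the cookie.
maapi.candidate_confirmed_commit_persistent(sock, 600, COOKIE, None)

try:
    # ... verify that the new configuration behaves as expected ...
    maapi.candidate_commit_persistent(sock, COOKIE)        # confirm
except Exception:
    maapi.candidate_abort_commit_persistent(sock, COOKIE)  # cancel/roll back
    raise
```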
Keyword arguments: * sock -- a python socket instance -* persist\_id -- gives the cookie for an already ongoing persistent confirmed commit +* persist_id -- gives the cookie for an already ongoing persistent confirmed commit -### candidate\_confirmed\_commit +### candidate_confirmed_commit ```python candidate_confirmed_commit(sock, timeoutsecs) -> None ``` -This function also copies the candidate into running. However if a call to maapi\_candidate\_commit() is not done within timeoutsecs an automatic rollback will occur. +This function also copies the candidate into running. However if a call to +maapi_candidate_commit() is not done within timeoutsecs an automatic +rollback will occur. Keyword arguments: * sock -- a python socket instance * timeoutsecs -- timeout in seconds -### candidate\_confirmed\_commit\_info +### candidate_confirmed_commit_info ```python candidate_confirmed_commit_info(sock, timeoutsecs, persist, persist_id, label, comment) -> None ``` -Like candidate\_confirmed\_commit\_persistent, but also allows for setting the Label and/or Comment that is stored in the rollback file when the candidate is committed to running. +Like candidate_confirmed_commit_persistent, but also allows for setting the +Label and/or Comment that is stored in the rollback file when the candidate +is committed to running. Note: - -> To ensure the Label and/or Comment are stored in the rollback file in all cases when doing a confirmed commit, they must be given with both, the confirmed commit (using this function) and the confirming commit (using candidate\_commit\_info()). +> To ensure the Label and/or Comment are stored in the rollback file in +> all cases when doing a confirmed commit, they must be given with both, +> the confirmed commit (using this function) and the confirming commit +> (using candidate_commit_info()). 
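To illustrate the note above, a sketch where the same Label and Comment are given both when starting the confirmed commit and when confirming it; passing None for the unused persist arguments is an assumption.

```python
from _ncs import maapi

# Confirmed commit with a 10 minute automatic-rollback timeout, recording
# a Label and Comment in the rollback file.
maapi.candidate_confirmed_commit_info(sock, 600, None, None,
                                      'maint-window', 'switch upgrade step 1')

# ... verify the change ...

# Confirming commit: repeat the Label/Comment so they are kept in all cases.
maapi.candidate_commit_info(sock, None, 'maint-window', 'switch upgrade step 1')
```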
Keyword arguments: * sock -- a python socket instance * timeoutsecs -- timeout in seconds * persist -- sets the cookie for the persistent confirmed commit -* persist\_id -- gives the cookie for an already ongoing persistent confirmed commit +* persist_id -- gives the cookie for an already ongoing persistent confirmed commit * label -- the Label * comment -- the Comment -### candidate\_confirmed\_commit\_persistent +### candidate_confirmed_commit_persistent ```python candidate_confirmed_commit_persistent(sock, timeoutsecs, persist, persist_id) -> None @@ -318,9 +359,9 @@ Keyword arguments: * sock -- a python socket instance * timeoutsecs -- timeout in seconds * persist -- sets the cookie for the persistent confirmed commit -* persist\_id -- gives the cookie for an already ongoing persistent confirmed commit +* persist_id -- gives the cookie for an already ongoing persistent confirmed commit -### candidate\_reset +### candidate_reset ```python candidate_reset(sock) -> None @@ -332,7 +373,7 @@ Keyword arguments: * sock -- a python socket instance -### candidate\_validate +### candidate_validate ```python candidate_validate(sock) -> None @@ -358,7 +399,7 @@ Keyword arguments: * thandle -- transaction handle * path -- position to change to -### clear\_opcache +### clear_opcache ```python clear_opcache(sock, path) -> None @@ -371,7 +412,7 @@ Keyword arguments: * sock -- a python socket instance * path -- the path to the subtree to clear -### cli\_accounting +### cli_accounting ```python cli_accounting(sock, user, usid, cmdstr) -> None @@ -385,7 +426,7 @@ Keyword arguments: * user -- user to generate the entry for * thandle -- transaction handle -### cli\_cmd +### cli_cmd ```python cli_cmd(sock, usess, buf) -> None @@ -399,13 +440,15 @@ Keyword arguments: * usess -- user session * buf -- string to write -### cli\_cmd2 +### cli_cmd2 ```python cli_cmd2(sock, usess, buf, flags) -> None ``` -Execute CLI command in a ongoing CLI session. With flags: CMD\_NO\_FULLPATH - Do not perform the fullpath check on show commands. CMD\_NO\_HIDDEN - Allows execution of hidden CLI commands. +Execute CLI command in a ongoing CLI session. With flags: +CMD_NO_FULLPATH - Do not perform the fullpath check on show commands. +CMD_NO_HIDDEN - Allows execution of hidden CLI commands. Keyword arguments: @@ -414,7 +457,7 @@ Keyword arguments: * buf -- string to write * flags -- as above -### cli\_cmd3 +### cli_cmd3 ```python cli_cmd3(sock, usess, buf, flags, unhide) -> None @@ -428,9 +471,10 @@ Keyword arguments: * usess -- user session * buf -- string to write * flags -- as above -* unhide -- The unhide parameter is used for passing a hide group which is unhidden during the execution of the command. +* unhide -- The unhide parameter is used for passing a hide group which is + unhidden during the execution of the command. -### cli\_cmd4 +### cli_cmd4 ```python cli_cmd4(sock, usess, buf, flags, unhide) -> None @@ -444,15 +488,17 @@ Keyword arguments: * usess -- user session * buf -- string to write * flags -- as above -* unhide -- The unhide parameter is used for passing a hide group which is unhidden during the execution of the command. +* unhide -- The unhide parameter is used for passing a hide group which is + unhidden during the execution of the command. -### cli\_cmd\_to\_path +### cli_cmd_to_path ```python cli_cmd_to_path(sock, line, nsize, psize) -> tuple ``` -Returns string of the C/I namespaced CLI path that can be associated with the given command. Returns a tuple ns and path. 
+Returns string of the C/I namespaced CLI path that can be associated with +the given command. Returns a tuple ns and path. Keyword arguments: @@ -461,13 +507,15 @@ Keyword arguments: * nsize -- limit length of namespace * psize -- limit length of path -### cli\_cmd\_to\_path2 +### cli_cmd_to_path2 ```python cli_cmd_to_path2(sock, thandle, line, nsize, psize) -> tuple ``` -Returns string of the C/I namespaced CLI path that can be associated with the given command. In the context of the provided transaction handle. Returns a tuple ns and path. +Returns string of the C/I namespaced CLI path that can be associated with +the given command. In the context of the provided transaction handle. +Returns a tuple ns and path. Keyword arguments: @@ -477,24 +525,26 @@ Keyword arguments: * nsize -- limit length of namespace * psize -- limit length of path -### cli\_diff\_cmd +### cli_diff_cmd ```python cli_diff_cmd(sock, thandle, thandle_old, flags, path, size) -> str ``` -Get the diff between two sessions as a series C/I cli commands. Returns a string. If no changes exist between the two sessions for the given path a \_ncs.error.Error will be thrown with the error set to ERR\_BADPATH +Get the diff between two sessions as a series C/I cli commands. Returns a +string. If no changes exist between the two sessions for the given path a +_ncs.error.Error will be thrown with the error set to ERR_BADPATH Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle -* thandle\_old -- transaction handle -* flags -- as for cli\_path\_cmd -* path -- as for cli\_path\_cmd +* thandle_old -- transaction handle +* flags -- as for cli_path_cmd +* path -- as for cli_path_cmd * size -- limit diff -### cli\_get +### cli_get ```python cli_get(sock, usess, opt, size) -> str @@ -509,13 +559,18 @@ Keyword arguments: * opt -- option to get * size -- maximum response size (optional, default 1024) -### cli\_path\_cmd +### cli_path_cmd ```python cli_path_cmd(sock, thandle, flags, path, size) -> str ``` -Returns string of the C/I CLI command that can be associated with the given path. The flags can be given as FLAG\_EMIT\_PARENTS to enable the commands to reach the submode for the path to be emitted. The flags can be given as FLAG\_DELETE to emit the command to delete the given path. The flags can be given as FLAG\_NON\_RECURSIVE to prevent that all children to a container or list item are displayed. +Returns string of the C/I CLI command that can be associated with the given +path. The flags can be given as FLAG_EMIT_PARENTS to enable the commands to +reach the submode for the path to be emitted. The flags can be given as +FLAG_DELETE to emit the command to delete the given path. The flags can be +given as FLAG_NON_RECURSIVE to prevent that all children to a container or +list item are displayed. Keyword arguments: @@ -525,7 +580,7 @@ Keyword arguments: * path -- the path for the cmd * size -- limit cmd -### cli\_prompt +### cli_prompt ```python cli_prompt(sock, usess, prompt, echo, size) -> str @@ -538,10 +593,11 @@ Keyword arguments: * sock -- a python socket instance * usess -- user session * prompt -- string to show the user -* echo -- determines wether to control if the input should be echoed or not. ECHO shows the input, NOECHO does not +* echo -- determines wether to control if the input should be echoed or not. 
+ ECHO shows the input, NOECHO does not * size -- maximum response size (optional, default 1024) -### cli\_set +### cli_set ```python cli_set(sock, usess, opt, value) -> None @@ -556,7 +612,7 @@ Keyword arguments: * opt -- option to set * value -- the new value of the session parameter -### cli\_write +### cli_write ```python cli_write(sock, usess, buf) -> None @@ -582,7 +638,7 @@ Keyword arguments: * sock -- a python socket instance -### commit\_trans +### commit_trans ```python commit_trans(sock, thandle) -> None @@ -595,7 +651,7 @@ Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle -### commit\_upgrade +### commit_upgrade ```python commit_upgrade(sock) -> None @@ -607,13 +663,15 @@ Keyword arguments: * sock -- a python socket instance -### confirmed\_commit\_in\_progress +### confirmed_commit_in_progress ```python confirmed_commit_in_progress(sock) -> int ``` -Checks whether a confirmed commit is ongoing. Returns a positive integer being the usid of confirmed commit operation in progress or 0 if no confirmed commit is in progress. +Checks whether a confirmed commit is ongoing. Returns a positive integer +being the usid of confirmed commit operation in progress or 0 if no +confirmed commit is in progress. Keyword arguments: @@ -632,7 +690,7 @@ Keyword arguments: * sock -- a python socket instance * ip -- the ip address * port -- the port -* path -- the path if socket is AF\_UNIX (optional) +* path -- the path if socket is AF_UNIX (optional) ### copy @@ -645,10 +703,10 @@ Copy all data from one data store to another. Keyword arguments: * sock -- a python socket instance -* from\_thandle -- transaction handle -* to\_thandle -- transaction handle +* from_thandle -- transaction handle +* to_thandle -- transaction handle -### copy\_path +### copy_path ```python copy_path(sock, from_thandle, to_thandle, path) -> None @@ -659,11 +717,11 @@ Copy subtree rooted at path from one data store to another. Keyword arguments: * sock -- a python socket instance -* from\_thandle -- transaction handle -* to\_thandle -- transaction handle +* from_thandle -- transaction handle +* to_thandle -- transaction handle * path -- the subtree rooted at path is copied -### copy\_running\_to\_startup +### copy_running_to_startup ```python copy_running_to_startup(sock) -> None @@ -675,7 +733,7 @@ Keyword arguments: * sock -- a python socket instance -### copy\_tree +### copy_tree ```python copy_tree(sock, thandle, frompath, topath) -> None @@ -695,7 +753,9 @@ Keyword arguments: create(sock, thandle, path) -> None ``` -Create a new list entry, a presence container or a leaf of type empty (unless in a union, if type empty is in a union use set\_elem instead) in the data tree. +Create a new list entry, a presence container or a leaf of type empty +(unless in a union, if type empty is in a union +use set_elem instead) in the data tree. Keyword arguments: @@ -703,7 +763,7 @@ Keyword arguments: * thandle -- transaction handle * path -- path of item to create -### cs\_node\_cd +### cs_node_cd ```python cs_node_cd(socket, thandle, path) -> Union[_ncs.CsNode, None] @@ -711,7 +771,9 @@ cs_node_cd(socket, thandle, path) -> Union[_ncs.CsNode, None] Utility function which finds the resulting CsNode given a string keypath. 
-Does the same thing as \_ncs.cs\_node\_cd(), but can handle paths that are ambiguous due to traversing a mount point, by sending a request to the daemon +Does the same thing as _ncs.cs_node_cd(), but can handle paths that are +ambiguous due to traversing a mount point, by sending a request to the +daemon Keyword arguments: @@ -719,19 +781,24 @@ Keyword arguments: * thandle -- transaction handle * path -- the keypath -### cs\_node\_children +### cs_node_children ```python cs_node_children(sock, thandle, mount_point, path) -> List[_ncs.CsNode] ``` -Retrieve a list of the children nodes of the node given by mount\_point that are valid for path. The mount\_point node must be a mount point (i.e. mount\_point.is\_mount\_point() == True), and the path must lead to a specific instance of this node (including the final keys if mount\_point is a list node). The thandle parameter is optional, i.e. it can be given as -1 if a transaction is not available. +Retrieve a list of the children nodes of the node given by mount_point +that are valid for path. The mount_point node must be a mount point +(i.e. mount_point.is_mount_point() == True), and the path must lead to +a specific instance of this node (including the final keys if mount_point +is a list node). The thandle parameter is optional, i.e. it can be given +as -1 if a transaction is not available. Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle -* mount\_point -- a CsNode instance +* mount_point -- a CsNode instance * path -- the path to the instance of the node ### delete @@ -740,7 +807,8 @@ Keyword arguments: delete(sock, thandle, path) -> None ``` -Delete an existing list entry, a presence container or a leaf of type empty from the data tree. +Delete an existing list entry, a presence container or a leaf of type empty +from the data tree. Keyword arguments: @@ -748,7 +816,7 @@ Keyword arguments: * thandle -- transaction handle * path -- path of item to delete -### delete\_all +### delete_all ```python delete_all(sock, thandle, how) -> None @@ -756,15 +824,21 @@ delete_all(sock, thandle, how) -> None Delete all data within a transaction. -The how argument specifies how to delete: DEL\_SAFE - Delete everything except namespaces that were exported with tailf:export none. Top-level nodes that cannot be deleted due to AAA rules are left in place (descendant nodes may be deleted if the rules allow it). DEL\_EXPORTED - As DEL\_SAFE, but AAA rules are ignored. DEL\_ALL - Delete everything, AAA rules are ignored. +The how argument specifies how to delete: + DEL_SAFE - Delete everything except namespaces that were exported with + tailf:export none. Top-level nodes that cannot be deleted + due to AAA rules are left in place (descendant nodes may be + deleted if the rules allow it). + DEL_EXPORTED - As DEL_SAFE, but AAA rules are ignored. + DEL_ALL - Delete everything, AAA rules are ignored. Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle -* how -- DEL\_SAFE, DEL\_EXPORTED or DEL\_ALL +* how -- DEL_SAFE, DEL_EXPORTED or DEL_ALL -### delete\_config +### delete_config ```python delete_config(sock, name) -> None @@ -777,7 +851,7 @@ Keyword arguments: * sock -- a python socket instance * name -- name of the datastore to empty -### destroy\_cursor +### destroy_cursor ```python destroy_cursor(mc) -> None @@ -795,7 +869,7 @@ Keyword arguments: detach(sock, ctx) -> None ``` -Detaches an attached \_MAAPI socket. +Detaches an attached _MAAPI socket. 
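As a small sketch of the tree manipulation calls (`create()`, `delete()`, `delete_all()`) inside an open transaction; the device keypaths are hypothetical and the `DEL_SAFE` constant is assumed to live in the same maapi module.

```python
from _ncs import maapi

# 'sock' is a connected MAAPI socket and 'thandle' an open read-write
# transaction (both assumed).
maapi.create(sock, thandle, '/ncs:devices/device{ce9}')  # new list entry
maapi.delete(sock, thandle, '/ncs:devices/device{ce0}')  # remove an entry

# Or wipe everything this session is allowed to delete:
# maapi.delete_all(sock, thandle, maapi.DEL_SAFE)
```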
Keyword arguments: @@ -808,14 +882,15 @@ Keyword arguments: detach2(sock, thandle) -> None ``` -Detaches an attached \_MAAPI socket when we do not have a transaction context available. +Detaches an attached _MAAPI socket when we do not have a transaction context +available. Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle -### diff\_iterate +### diff_iterate ```python diff_iterate(sock, thandle, iter, flags) -> None @@ -823,49 +898,53 @@ diff_iterate(sock, thandle, iter, flags) -> None Iterate through a transaction diff. -For each diff in the transaction the callback function 'iter' will be called. The iter function needs to have the following signature: +For each diff in the transaction the callback function 'iter' will be +called. The iter function needs to have the following signature: -``` -def iter(keypath, operation, oldvalue, newvalue) -``` + def iter(keypath, operation, oldvalue, newvalue) Where arguments are: * keypath - the affected path (HKeypathRef) -* operation - one of MOP\_CREATED, MOP\_DELETED, MOP\_MODIFIED, MOP\_VALUE\_SET, MOP\_MOVED\_AFTER, or MOP\_ATTR\_SET +* operation - one of MOP_CREATED, MOP_DELETED, MOP_MODIFIED, MOP_VALUE_SET, + MOP_MOVED_AFTER, or MOP_ATTR_SET * oldvalue - always None * newvalue - see below -The 'newvalue' argument may be set for operation MOP\_VALUE\_SET and is a Value object in that case. For MOP\_MOVED\_AFTER it may be set to a list of key values identifying an entry in the list - if it's None the list entry has been moved to the beginning of the list. For MOP\_ATTR\_SET it will be set to a 2-tuple of Value's where the first Value is the attribute set and the second Value is the value the attribute was set to. If the attribute has been deleted the second value is of type C\_NOEXISTS +The 'newvalue' argument may be set for operation MOP_VALUE_SET and is a +Value object in that case. For MOP_MOVED_AFTER it may be set to a list of +key values identifying an entry in the list - if it's None the list entry +has been moved to the beginning of the list. For MOP_ATTR_SET it will be +set to a 2-tuple of Value's where the first Value is the attribute set +and the second Value is the value the attribute was set to. 
If the +attribute has been deleted the second value is of type C_NOEXISTS The iter function should return one of: -* ITER\_STOP - Stop further iteration -* ITER\_RECURSE - Recurse further down the node children -* ITER\_CONTINUE - Ignore node children and continue with the node's siblings +* ITER_STOP - Stop further iteration +* ITER_RECURSE - Recurse further down the node children +* ITER_CONTINUE - Ignore node children and continue with the node's siblings One could also define a class implementing the call function as: -``` -class DiffIterator(object): - def __init__(self): - self.count = 0 + class DiffIterator(object): + def __init__(self): + self.count = 0 - def __call__(self, kp, op, oldv, newv): - print('kp={0}, op={1}, oldv={2}, newv={3}'.format( - str(kp), str(op), str(oldv), str(newv))) - self.count += 1 - return _confd.ITER_RECURSE -``` + def __call__(self, kp, op, oldv, newv): + print('kp={0}, op={1}, oldv={2}, newv={3}'.format( + str(kp), str(op), str(oldv), str(newv))) + self.count += 1 + return _confd.ITER_RECURSE Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle * iter -- iterator function, will be called for every diff in the transaction -* flags -- bitmask of ITER\_WANT\_ATTR and ITER\_WANT\_P\_CONTAINER +* flags -- bitmask of ITER_WANT_ATTR and ITER_WANT_P_CONTAINER -### disconnect\_remote +### disconnect_remote ```python disconnect_remote(sock, address) -> None @@ -878,7 +957,7 @@ Keyword arguments: * sock -- a python socket instance * address -- ip address (string) -### disconnect\_sockets +### disconnect_sockets ```python disconnect_sockets(sock, sockets) -> None @@ -891,13 +970,15 @@ Keyword arguments: * sock -- a python socket instance * sockets -- list of sockets (int) -### do\_display +### do_display ```python do_display(sock, thandle, path) -> int ``` -If the data model uses the YANG when or tailf:display-when statement, this function can be used to determine if the item given by 'path' should be displayed or not. +If the data model uses the YANG when or tailf:display-when statement, this +function can be used to determine if the item given by 'path' should +be displayed or not. Keyword arguments: @@ -905,21 +986,22 @@ Keyword arguments: * thandle -- transaction handle * path -- path to the 'display-when' statement -### end\_progress\_span +### end_progress_span ```python end_progress_span(sock, span, annotation) -> int ``` -Ends a progress span started from start\_progress\_span() or start\_progress\_span\_th(). +Ends a progress span started from start_progress_span() or +start_progress_span_th(). Keyword arguments: - * sock -- a python socket instance -* span -- span\_id (string) or dict with key 'span\_id' -* annotation -- metadata about the event, indicating error, explains latency or shows result etc +* span -- span_id (string) or dict with key 'span_id' +* annotation -- metadata about the event, indicating error, explains latency + or shows result etc -### end\_user\_session +### end_user_session ```python end_user_session(sock) -> None @@ -945,32 +1027,34 @@ Keyword arguments: * thandle -- transaction handle * path -- position to check -### find\_next +### find_next ```python find_next(mc, type, inkeys) -> Union[List[_ncs.Value], bool] ``` -Update the cursor mc with the key(s) for the list entry designated by the type and inkeys parameters. This function may be used to start a traversal from an arbitrary entry in a list. Keys for subsequent entries may be retrieved with the get\_next() function. 
When no more keys are found, False is returned. +Update the cursor mc with the key(s) for the list entry designated by the +type and inkeys parameters. This function may be used to start a traversal +from an arbitrary entry in a list. Keys for subsequent entries may be +retrieved with the get_next() function. When no more keys are found, False +is returned. The strategy to use is defined by type: -``` -FIND_NEXT - The keys for the first list entry after the one - indicated by the inkeys argument. -FIND_SAME_OR_NEXT - If the values in the inkeys array completely - identifies an actual existing list entry, the keys for - this entry are requested. Otherwise the same logic as - for FIND_NEXT above. -``` + FIND_NEXT - The keys for the first list entry after the one + indicated by the inkeys argument. + FIND_SAME_OR_NEXT - If the values in the inkeys array completely + identifies an actual existing list entry, the keys for + this entry are requested. Otherwise the same logic as + for FIND_NEXT above. Keyword arguments: * mc -- maapiCursor -* type -- CONFD\_FIND\_NEXT or CONFD\_FIND\_SAME\_OR\_NEXT +* type -- CONFD_FIND_NEXT or CONFD_FIND_SAME_OR_NEXT * inkeys -- where to start finding -### finish\_trans +### finish_trans ```python finish_trans(sock, thandle) -> None @@ -978,14 +1062,15 @@ finish_trans(sock, thandle) -> None Finish a transaction. -If the transaction is implemented by an external database, this will invoke the finish() callback. +If the transaction is implemented by an external database, this will invoke +the finish() callback. Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle -### get\_attrs +### get_attrs ```python get_attrs(sock, thandle, attrs, keypath) -> list @@ -1000,7 +1085,7 @@ Keyword arguments: * attrs -- list of type of attributes to get * keypath -- path to choice -### get\_authorization\_info +### get_authorization_info ```python get_authorization_info(sock, usessid) -> _ncs.AuthorizationInfo @@ -1013,7 +1098,7 @@ Keyword arguments: * sock -- a python socket instance * usessid -- user session id -### get\_case +### get_case ```python get_case(sock, thandle, choice, keypath) -> _ncs.Value @@ -1028,7 +1113,7 @@ Keyword arguments: * choice -- choice name * keypath -- path to choice -### get\_elem +### get_elem ```python get_elem(sock, thandle, path) -> _ncs.Value @@ -1042,7 +1127,7 @@ Keyword arguments: * thandle -- transaction handle * path -- position of elem -### get\_my\_user\_session\_id +### get_my_user_session_id ```python get_my_user_session_id(sock) -> int @@ -1054,19 +1139,20 @@ Keyword arguments: * sock -- a python socket instance -### get\_next +### get_next ```python get_next(mc) -> Union[List[_ncs.Value], bool] ``` -Iterates and gets the keys for the next entry in a list. When no more keys are found, False is returned. +Iterates and gets the keys for the next entry in a list. When no more keys +are found, False is returned. Keyword arguments: * mc -- maapiCursor -### get\_object +### get_object ```python get_object(sock, thandle, n, keypath) -> List[_ncs.Value] @@ -1080,13 +1166,14 @@ Keyword arguments: * thandle -- transaction handle * path -- position of list entry -### get\_objects +### get_objects ```python get_objects(mc, n, nobj) -> List[_ncs.Value] ``` -Read at most n values from each nobj lists starting at Cursor mc. Returns a list of Value's. +Read at most n values from each nobj lists starting at Cursor mc. +Returns a list of Value's. 
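The cursor helpers combine roughly as follows: `init_cursor()` (documented further down) positions a cursor on a list, `get_next()` walks the keys, and `destroy_cursor()` releases it; the list path is an illustrative assumption.

```python
from _ncs import maapi

cur = maapi.init_cursor(sock, thandle, '/ncs:devices/device')
keys = maapi.get_next(cur)
while keys is not False:
    # 'keys' is a list of _ncs.Value objects holding the entry's key(s).
    print('device:', str(keys[0]))
    keys = maapi.get_next(cur)
maapi.destroy_cursor(cur)
```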
Keyword arguments: @@ -1094,61 +1181,87 @@ Keyword arguments: * n -- at most n values will be read * nobj -- number of nobj lists which n elements will be taken from -### get\_rollback\_id +### get_rollback_id ```python get_rollback_id(sock, thandle) -> int ``` -Get rollback id from a committed transaction. Returns int with fixed id, where -1 indicates an error or no rollback id available. +Get rollback id from a committed transaction. Returns int with fixed id, +where -1 indicates an error or no rollback id available. Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle -### get\_running\_db\_status +### get_running_db_status ```python get_running_db_status(sock) -> int ``` -If a transaction fails in the commit() phase, the configuration database is in in a possibly inconsistent state. This function queries ConfD on the consistency state. Returns 1 if the configuration is consistent and 0 otherwise. +If a transaction fails in the commit() phase, the configuration database is +in in a possibly inconsistent state. This function queries ConfD on the +consistency state. Returns 1 if the configuration is consistent and 0 +otherwise. Keyword arguments: * sock -- a python socket instance -### get\_schema\_file\_path +### get_schema_file_path ```python get_schema_file_path(sock) -> str ``` -If shared memory schema support has been enabled, this function will return the pathname of the file used for the shared memory mapping, which can then be passed to the mmap\_schemas() function> +If shared memory schema support has been enabled, this function will +return the pathname of the file used for the shared memory mapping, +which can then be passed to the mmap_schemas() function> -If creation of the schema file is in progress when the function is called, the call will block until the creation has completed. +If creation of the schema file is in progress when the function +is called, the call will block until the creation has completed. Keyword arguments: * sock -- a python socket instance -### get\_stream\_progress +### get_stream_progress ```python get_stream_progress(sock, id) -> int ``` -Used in conjunction with a maapi stream to see how much data has been consumed. +Used in conjunction with a maapi stream to see how much data has been +consumed. -This function allows us to limit the amount of data 'in flight' between the application and the system. The sock parameter must be the maapi socket used for a function call that required a stream socket for writing (currently the only such function is load\_config\_stream()), and the id parameter is the id returned by that function. +This function allows us to limit the amount of data 'in flight' between the +application and the system. The sock parameter must be the maapi socket +used for a function call that required a stream socket for writing +(currently the only such function is load_config_stream()), and the id +parameter is the id returned by that function. Keyword arguments: * sock -- a python socket instance -* id -- the id returned from load\_config\_stream() +* id -- the id returned from load_config_stream() -### get\_templates +### get_template_variables + +```python +get_template_variables(sock, template_name, type) -> list +``` + +Get the template variables for a specific template. 
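For instance, the consistency check provided by `get_running_db_status()` above can serve as a guard after a failed commit (sketch; `sock` is assumed to be a connected MAAPI socket).

```python
from _ncs import maapi

if maapi.get_running_db_status(sock) != 1:
    # 0 means a commit() phase failed half-way; 1 means running is consistent.
    print('warning: the running datastore may be inconsistent')
```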
+ +Keyword arguments: + +* sock -- a python socket instance +* template_name -- the name of the template +* type -- the type of the template (int) + +### get_templates ```python get_templates(sock) -> list @@ -1160,20 +1273,35 @@ Keyword arguments: * sock -- a python socket instance -### get\_trans\_params +### get_trans_mode + +```python +get_trans_mode(sock, thandle, mode) -> int +``` + +Get the transaction mode for a transaction. + +Keyword arguments: + +* sock -- a python socket instance +* thandle -- transaction handle +* mode -- the mode of transaction + +### get_trans_params ```python get_trans_params(sock, thandle) -> list ``` -Get the commit parameters for a transaction. The commit parameters are returned as a list of TagValue objects. +Get the commit parameters for a transaction. The commit parameters are +returned as a list of TagValue objects. Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle -### get\_user\_session +### get_user_session ```python get_user_session(sock, usessid) -> _ncs.UserInfo @@ -1186,7 +1314,7 @@ Keyword arguments: * sock -- a python socket instance * usessid -- session id -### get\_user\_session\_identification +### get_user_session_identification ```python get_user_session_identification(sock, usessid) -> dict @@ -1194,27 +1322,30 @@ get_user_session_identification(sock, usessid) -> dict Get user session identification data. -Get the user identification data related to a user session provided by the 'usessid' argument. The function returns a dict with the user identification data. +Get the user identification data related to a user session provided by the +'usessid' argument. The function returns a dict with the user +identification data. Keyword arguments: * sock -- a python socket instance * usessid -- user session id -### get\_user\_session\_opaque +### get_user_session_opaque ```python get_user_session_opaque(sock, usessid) -> str ``` -Returns a string containing additional 'opaque' information, if additional 'opaque' information is available. +Returns a string containing additional 'opaque' information, if additional +'opaque' information is available. Keyword arguments: * sock -- a python socket instance * usessid -- user session id -### get\_user\_sessions +### get_user_sessions ```python get_user_sessions(sock) -> list @@ -1226,7 +1357,7 @@ Keyword arguments: * sock -- a python socket instance -### get\_values +### get_values ```python get_values(sock, thandle, values, keypath) -> list @@ -1253,7 +1384,7 @@ Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle -### getcwd\_kpath +### getcwd_kpath ```python getcwd_kpath(sock, thandle) -> _ncs.HKeypathRef @@ -1266,37 +1397,39 @@ Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle -### hide\_group +### hide_group ```python hide_group(sock, thandle, group_name) -> None ``` -Hide all nodes belonging to a hide group in a transaction that started with flag FLAG\_HIDE\_ALL\_HIDEGROUPS. +Hide all nodes belonging to a hide group in a transaction that started +with flag FLAG_HIDE_ALL_HIDEGROUPS. Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle -* group\_name -- the group name +* group_name -- the group name -### init\_cursor +### init_cursor ```python init_cursor(sock, thandle, path) -> maapi.Cursor ``` -Whenever we wish to iterate over the entries in a list in the data tree, we must first initialize a cursor. 
+Whenever we wish to iterate over the entries in a list in the data tree, we +must first initialize a cursor. Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle * path -- position of elem -* secondary\_index -- name of secondary index to use (optional) -* xpath\_expr -- xpath expression used to filter results (optional) +* secondary_index -- name of secondary index to use (optional) +* xpath_expr -- xpath expression used to filter results (optional) -### init\_upgrade +### init_upgrade ```python init_upgrade(sock, timeoutsecs, flags) -> None @@ -1307,8 +1440,10 @@ First step in an upgrade, initializes the upgrade procedure. Keyword arguments: * sock -- a python socket instance -* timeoutsecs -- maximum time to wait for user to voluntarily exit from 'configuration' mode -* flags -- 0 or 'UPGRADE\_KILL\_ON\_TIMEOUT' (will terminate all ongoing transactions +* timeoutsecs -- maximum time to wait for user to voluntarily exit from + 'configuration' mode +* flags -- 0 or 'UPGRADE_KILL_ON_TIMEOUT' (will terminate all ongoing + transactions ### insert @@ -1324,7 +1459,7 @@ Keyword arguments: * thandle -- transaction handle * path -- the subtree rooted at path is copied -### install\_crypto\_keys +### install_crypto_keys ```python install_crypto_keys(sock) -> None @@ -1336,7 +1471,7 @@ Keyword arguments: * sock -- a python socket instance -### is\_candidate\_modified +### is_candidate_modified ```python is_candidate_modified(sock) -> bool @@ -1348,19 +1483,20 @@ Keyword arguments: * sock -- a python socket instance -### is\_lock\_set +### is_lock_set ```python is_lock_set(sock, name) -> int ``` -Check if db name is locked. Return the 'usid' of the user holding the lock or 0 if not locked. +Check if db name is locked. Return the 'usid' of the user holding the lock +or 0 if not locked. Keyword arguments: * sock -- a python socket instance -### is\_running\_modified +### is_running_modified ```python is_running_modified(sock) -> bool @@ -1378,39 +1514,38 @@ Keyword arguments: iterate(sock, thandle, iter, flags, path) -> None ``` -Used to iterate over all the data in a transaction and the underlying data store as opposed to only iterate over changes like diff\_iterate. +Used to iterate over all the data in a transaction and the underlying data +store as opposed to only iterate over changes like diff_iterate. Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle * iter -- iterator function, will be called for every diff in the transaction -* flags -- ITER\_WANT\_ATTR or 0 +* flags -- ITER_WANT_ATTR or 0 * path -- receive only changes from this path and below The iter callback function should have the following signature: -``` -def my_iterator(kp, v, attr_vals) -``` + def my_iterator(kp, v, attr_vals) -### keypath\_diff\_iterate +### keypath_diff_iterate ```python keypath_diff_iterate(sock, thandle, iter, flags, path) -> None ``` -Like diff\_iterate but takes an additional path argument. +Like diff_iterate but takes an additional path argument. 
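A sketch of a diff callback restricted to a subtree with `keypath_diff_iterate()`; writing the return constant as `_ncs.ITER_RECURSE` assumes it mirrors the `_confd.ITER_RECURSE` used in the `diff_iterate()` example above, and the device path is illustrative.

```python
import _ncs
from _ncs import maapi

def print_change(kp, op, oldv, newv):
    print('changed: {0} (op {1})'.format(str(kp), op))
    return _ncs.ITER_RECURSE  # also descend into the node's children

# Only report changes at or below one device's configuration.
maapi.keypath_diff_iterate(sock, thandle, print_change, 0,
                           '/ncs:devices/device{ce0}/config')
```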
Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle * iter -- iterator function, will be called for every diff in the transaction -* flags -- bitmask of ITER\_WANT\_ATTR and ITER\_WANT\_P\_CONTAINER +* flags -- bitmask of ITER_WANT_ATTR and ITER_WANT_P_CONTAINER * path -- receive only changes from this path and below -### kill\_user\_session +### kill_user_session ```python kill_user_session(sock, usessid) -> None @@ -1423,21 +1558,21 @@ Keyword arguments: * sock -- a python socket instance * usessid -- the MAAPI session id to be killed -### load\_config +### load_config ```python load_config(sock, thandle, flags, filename) -> None ``` -Loads configuration from 'filename'. The caller of the function has to indicate which format the file has by using one of the following flags: +Loads configuration from 'filename'. +The caller of the function has to indicate which format the file has by +using one of the following flags: -``` - CONFIG_XML -- XML format - CONFIG_J -- Juniper curly bracket style - CONFIG_C -- Cisco XR style - CONFIG_TURBO_C -- A faster version of CONFIG_C - CONFIG_C_IOS -- Cisco IOS style -``` + CONFIG_XML -- XML format + CONFIG_J -- Juniper curly bracket style + CONFIG_C -- Cisco XR style + CONFIG_TURBO_C -- A faster version of CONFIG_C + CONFIG_C_IOS -- Cisco IOS style Keyword arguments: @@ -1446,7 +1581,7 @@ Keyword arguments: * flags -- as above * filename -- to read the configuration from -### load\_config\_cmds +### load_config_cmds ```python load_config_cmds(sock, thandle, flags, cmds, path) -> None @@ -1461,34 +1596,36 @@ Keyword arguments: * cmds -- a string of cmds * flags -- as above -### load\_config\_stream +### load_config_stream ```python load_config_stream(sock, th, flags) -> int ``` -Loads configuration from the stream socket. The th and flags parameters are the same as for load\_config(). Returns and id. +Loads configuration from the stream socket. The th and flags parameters are +the same as for load_config(). Returns and id. Keyword arguments: * sock -- a python socket instance * thandle -- a transaction handle -* flags -- as for load\_config() +* flags -- as for load_config() -### load\_config\_stream\_result +### load_config_stream_result ```python load_config_stream_result(sock, id) -> int ``` -We use this function to verify that the configuration we wrote on the stream socket was successfully loaded. +We use this function to verify that the configuration we wrote on the +stream socket was successfully loaded. Keyword arguments: * sock -- a python socket instance -* id -- the id returned from load\_config\_stream() +* id -- the id returned from load_config_stream() -### load\_schemas +### load_schemas ```python load_schemas(sock) -> None @@ -1500,7 +1637,7 @@ Keyword arguments: * sock -- a python socket instance -### load\_schemas\_list +### load_schemas_list ```python load_schemas_list(sock, flags, nshash, nsflags) -> None @@ -1528,7 +1665,7 @@ Keyword arguments: * sock -- a python socket instance * name -- name of the database to lock -### lock\_partial +### lock_partial ```python lock_partial(sock, name, xpaths) -> int @@ -1547,7 +1684,8 @@ Keyword arguments: move(sock, thandle, tokey, path) -> None ``` -Moves an existing list entry, i.e. renames the entry using the tokey parameter. +Moves an existing list entry, i.e. renames the entry using the tokey +parameter. 
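A sketch of `move()`, which renames a list entry by giving it a new key; building that key as a list of `_ncs.Value` objects of type `C_BUF`, like the keypath itself, is an assumption made for illustration.

```python
import _ncs
from _ncs import maapi

# Rename list entry {ce0} to {ce0-renamed} in a hypothetical device list.
new_key = [_ncs.Value('ce0-renamed', _ncs.C_BUF)]
maapi.move(sock, thandle, new_key, '/ncs:devices/device{ce0}')
```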
Keyword arguments: @@ -1556,7 +1694,7 @@ Keyword arguments: * tokey -- confdValue list * path -- the subtree rooted at path is copied -### move\_ordered +### move_ordered ```python move_ordered(sock, thandle, where, tokey, path) -> None @@ -1572,7 +1710,7 @@ Keyword arguments: * tokey -- confdValue list * path -- the subtree rooted at path is copied -### netconf\_ssh\_call\_home +### netconf_ssh_call_home ```python netconf_ssh_call_home(sock, host, port) -> None @@ -1582,9 +1720,11 @@ Initiates a NETCONF SSH Call Home connection. Keyword arguments: -sock -- a python socket instance host -- an ipv4 addres, ipv6 address, or host name port -- the port to connect to +sock -- a python socket instance +host -- an ipv4 addres, ipv6 address, or host name +port -- the port to connect to -### netconf\_ssh\_call\_home\_opaque +### netconf_ssh_call_home_opaque ```python netconf_ssh_call_home_opaque(sock, host, opaque, port) -> None @@ -1592,9 +1732,13 @@ netconf_ssh_call_home_opaque(sock, host, opaque, port) -> None Initiates a NETCONF SSH Call Home connection. -Keyword arguments: sock -- a python socket instance host -- an ipv4 addres, ipv6 address, or host name opaque -- opaque string passed to an external call home session port -- the port to connect to +Keyword arguments: +sock -- a python socket instance +host -- an ipv4 addres, ipv6 address, or host name +opaque -- opaque string passed to an external call home session +port -- the port to connect to -### num\_instances +### num_instances ```python num_instances(sock, thandle, path) -> int @@ -1608,7 +1752,7 @@ Keyword arguments: * thandle -- transaction handle * path -- position to check -### perform\_upgrade +### perform_upgrade ```python perform_upgrade(sock, loadpathdirs) -> None @@ -1634,7 +1778,7 @@ Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle -### prepare\_trans +### prepare_trans ```python prepare_trans(sock, thandle) -> None @@ -1647,7 +1791,7 @@ Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle -### prepare\_trans\_flags +### prepare_trans_flags ```python prepare_trans_flags(sock, thandle, flags) -> None @@ -1661,13 +1805,14 @@ Keyword arguments: * thandle -- transaction handle * flags -- flags to set in the transaction -### prio\_message +### prio_message ```python prio_message(sock, to, message) -> None ``` -Like sys\_message but will be output directly instead of delivered when the receiver terminates any ongoing command. +Like sys_message but will be output directly instead of delivered when the +receiver terminates any ongoing command. Keyword arguments: @@ -1675,40 +1820,50 @@ Keyword arguments: * to -- user to send message to or 'all' to send to all users * message -- the message -### progress\_info +### progress_info ```python progress_info(sock, msg, verbosity, attrs, links, path) -> None ``` -While spans represents a pair of data points: start and stop; info events are instead singular events, one point in time. Call progress\_info() to write a progress span info event to the progress trace. The info event will have the same span-id as the start and stop events of the currently ongoing progress span in the active user session or transaction. See start\_progress\_span() for more information. +While spans represents a pair of data points: start and stop; info events +are instead singular events, one point in time. Call progress_info() to +write a progress span info event to the progress trace. 
The info event +will have the same span-id as the start and stop events of the currently +ongoing progress span in the active user session or transaction. See +start_progress_span() for more information. Keyword arguments: * sock -- a python socket instance * msg -- message to report -* verbosity -- VERBOSITY\_\*, default: VERBOSITY\_NORMAL (optional) +* verbosity -- VERBOSITY_*, default: VERBOSITY_NORMAL (optional) * attrs -- user defined attributes (dict) -* links -- to existing traces or spans \[{'trace\_id':'...', 'span\_id':'...'}] +* links -- to existing traces or spans [{'trace_id':'...', 'span_id':'...'}] * path -- keypath to an action/leaf/service -### progress\_info\_th +### progress_info_th ```python progress_info_th(sock, thandle, msg, verbosity, attrs, links, path) -> None ``` -While spans represents a pair of data points: start and stop; info events are instead singular events, one point in time. Call progress\_info() to write a progress span info event to the progress trace. The info event will have the same span-id as the start and stop events of the currently ongoing progress span in the active user session or transaction. See start\_progress\_span() for more information. +While spans represents a pair of data points: start and stop; info events +are instead singular events, one point in time. Call progress_info() to +write a progress span info event to the progress trace. The info event +will have the same span-id as the start and stop events of the currently +ongoing progress span in the active user session or transaction. See +start_progress_span() for more information. Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle * msg -- message to report -* verbosity -- VERBOSITY\_\*, default: VERBOSITY\_NORMAL (optional) +* verbosity -- VERBOSITY_*, default: VERBOSITY_NORMAL (optional) * attrs -- user defined attributes (dict) -* links -- to existing traces or spans \[{'trace\_id':'...', 'span\_id':'...'}] +* links -- to existing traces or spans [{'trace_id':'...', 'span_id':'...'}] * path -- keypath to an action/leaf/service ### pushd @@ -1717,7 +1872,8 @@ Keyword arguments: pushd(sock, thandle, path) -> None ``` -Like cd, but saves the previous position in the tree. This can later be used by popd to return. +Like cd, but saves the previous position in the tree. This can later be used +by popd to return. Keyword arguments: @@ -1725,19 +1881,19 @@ Keyword arguments: * thandle -- transaction handle * path -- position to change to -### query\_free\_result +### query_free_result ```python query_free_result(qrs) -> None ``` -Deallocates the struct returned by 'query\_result()'. +Deallocates the struct returned by 'query_result()'. Keyword arguments: * qrs -- the query result structure to free -### query\_reset +### query_reset ```python query_reset(sock, qh) -> None @@ -1750,7 +1906,7 @@ Keyword arguments: * sock -- a python socket instance * qh -- query handle -### query\_reset\_to +### query_reset_to ```python query_reset_to(sock, qh, offset) -> None @@ -1764,20 +1920,21 @@ Keyword arguments: * qh -- query handle * offset -- offset counted from the beginning -### query\_result +### query_result ```python query_result(sock, qh) -> _ncs.QueryResult ``` -Fetches the next available chunk of results associated with query handle qh. +Fetches the next available chunk of results associated with query handle +qh. 
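The query calls are normally used together; a sketch assuming the `_ncs.QUERY_STRING` result format constant and an illustrative XPath expression.

```python
import _ncs
from _ncs import maapi

qh = maapi.query_start(sock, thandle,
                       '/ncs:devices/device',  # XPath expression (example)
                       None,                   # context node: /
                       100,                    # chunk size
                       1,                      # start from the first result
                       _ncs.QUERY_STRING,      # result format (assumed constant)
                       ['name', 'address'],    # select expressions
                       [])                     # no sorting
res = maapi.query_result(sock, qh)
# ... consume 'res', calling query_result() again for further chunks ...
maapi.query_stop(sock, qh)
```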
Keyword arguments: * sock -- a python socket instance * qh -- query handle -### query\_result\_count +### query_result_count ```python query_result_count(sock, qh) -> int @@ -1790,28 +1947,32 @@ Keyword arguments: * sock -- a python socket instance * qh -- query handle -### query\_start +### query_start ```python query_start(sock, thandle, expr, context_node, chunk_size, initial_offset, result_as, select, sort) -> int ``` -Starts a new query attached to the transaction given in 'th'. Returns a query handle. +Starts a new query attached to the transaction given in 'th'. +Returns a query handle. Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle * expr -- the XPath Path expression to evaluate -* context\_node -- The context node (an ikeypath) for the primary expression, or None (which means that the context node will be /). -* chunk\_size -- How many results to return at a time. If set to 0, a default number will be used. -* initial\_offset -- Which result in line to begin with (1 means to start from the beginning). -* result\_as -- The format the results will be returned in. +* context_node -- The context node (an ikeypath) for the primary expression, + or None (which means that the context node will be /). +* chunk_size -- How many results to return at a time. If set to 0, + a default number will be used. +* initial_offset -- Which result in line to begin with (1 means to start + from the beginning). +* result_as -- The format the results will be returned in. * select -- An array of XPath 'select' expressions. * sort -- An array of XPath expressions which will be used for sorting -### query\_stop +### query_stop ```python query_stop(sock, qh) -> None @@ -1824,28 +1985,27 @@ Keyword arguments: * sock -- a python socket instance * qh -- query handle -### rebind\_listener +### rebind_listener ```python rebind_listener(sock, listener) -> None ``` -Request that the subsystems specified by 'listeners' rebinds its listener socket(s). +Request that the subsystems specified by 'listeners' rebinds its listener +socket(s). Keyword arguments: * sock -- a python socket instance -* listener -- One of the following parameters (ORed together if more than one) +* listener -- One of the following parameters (ORed together if more than one) - ``` - LISTENER_IPC - LISTENER_NETCONF - LISTENER_SNMP - LISTENER_CLI - LISTENER_WEBUI - ``` + LISTENER_IPC + LISTENER_NETCONF + LISTENER_SNMP + LISTENER_CLI + LISTENER_WEBUI -### reload\_config +### reload_config ```python reload_config(sock) -> None @@ -1857,7 +2017,7 @@ Keyword arguments: * sock -- a python socket instance -### reopen\_logs +### reopen_logs ```python reopen_logs(sock) -> None @@ -1869,7 +2029,7 @@ Keyword arguments: * sock -- a python socket instance -### report\_progress +### report_progress ```python report_progress(sock, verbosity, msg) -> None @@ -1877,9 +2037,11 @@ report_progress(sock, verbosity, msg) -> None Report progress events. -This function makes it possible to report transaction/action progress from user code. +This function makes it possible to report transaction/action progress +from user code. -This function is deprecated and will be removed in a future release. Use progress\_info() instead. +This function is deprecated and will be removed in a future release. +Use progress_info() instead. 
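Since the whole `report_progress*()` family is deprecated, new code is steered towards the progress span API; a sketch of the `progress_info_th()` call documented above, where passing None for the unused links and path arguments is an assumption.

```python
import _ncs
from _ncs import maapi

maapi.progress_info_th(sock, thandle,
                       'allocating vlan-id',    # msg
                       _ncs.VERBOSITY_NORMAL,   # verbosity (assumed constant)
                       {'pool': 'vlan-pool-1'}, # user-defined attributes
                       None,                    # links
                       None)                    # path
```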
Keyword arguments: @@ -1888,7 +2050,7 @@ Keyword arguments: * verbosity -- at which verbosity level the message should be reported * msg -- message to report -### report\_progress2 +### report_progress2 ```python report_progress2(sock, verbosity, msg, package) -> None @@ -1896,9 +2058,11 @@ report_progress2(sock, verbosity, msg, package) -> None Report progress events. -This function makes it possible to report transaction/action progress from user code. +This function makes it possible to report transaction/action progress +from user code. -This function is deprecated and will be removed in a future release. Use progress\_info() instead. +This function is deprecated and will be removed in a future release. +Use progress_info() instead. Keyword arguments: @@ -1908,17 +2072,20 @@ Keyword arguments: * msg -- message to report * package -- from what package the message is reported -### report\_progress\_start +### report_progress_start ```python report_progress_start(sock, verbosity, msg, package) -> int ``` -Report progress events. Used for calculation of the duration between two events. +Report progress events. +Used for calculation of the duration between two events. -This function makes it possible to report transaction/action progress from user code. +This function makes it possible to report transaction/action progress +from user code. -This function is deprecated and will be removed in a future release. Use start\_progress\_span() instead. +This function is deprecated and will be removed in a future release. +Use start_progress_span() instead. Keyword arguments: @@ -1928,18 +2095,21 @@ Keyword arguments: * msg -- message to report * package -- from what package the message is reported (only NCS) -### report\_progress\_stop +### report_progress_stop ```python report_progress_stop(sock, verbosity, msg, annotation, package, timestamp) -> int ``` -Report progress events. Used for calculation of the duration between two events. +Report progress events. +Used for calculation of the duration between two events. -This function makes it possible to report transaction/action progress from user code. +This function makes it possible to report transaction/action progress +from user code. -This function is deprecated and will be removed in a future release. Use end\_progress\_span() instead. +This function is deprecated and will be removed in a future release. +Use end_progress_span() instead. Keyword arguments: @@ -1947,11 +2117,12 @@ Keyword arguments: * thandle -- transaction handle * verbosity -- at which verbosity level the message should be reported * msg -- message to report -* annotation -- metadata about the event, indicating error, explains latency or shows result etc +* annotation -- metadata about the event, indicating error, explains latency + or shows result etc * package -- from what package the message is reported (only NCS) * timestamp -- start of the event -### report\_service\_progress +### report_service_progress ```python report_service_progress(sock, verbosity, msg, path) -> None @@ -1959,9 +2130,11 @@ report_service_progress(sock, verbosity, msg, path) -> None Report progress events for a service. -This function makes it possible to report transaction progress from FASTMAP code. +This function makes it possible to report transaction progress +from FASTMAP code. -This function is deprecated and will be removed in a future release. Use progress\_info() instead. +This function is deprecated and will be removed in a future release. +Use progress_info() instead. 
Keyword arguments: @@ -1971,7 +2144,7 @@ Keyword arguments: * msg -- message to report * path -- service instance path -### report\_service\_progress2 +### report_service_progress2 ```python report_service_progress2(sock, verbosity, msg, package, path) -> None @@ -1979,9 +2152,11 @@ report_service_progress2(sock, verbosity, msg, package, path) -> None Report progress events for a service. -This function makes it possible to report transaction progress from FASTMAP code. +This function makes it possible to report transaction progress +from FASTMAP code. -This function is deprecated and will be removed in a future release. Use progress\_info() instead. +This function is deprecated and will be removed in a future release. +Use progress_info() instead. Keyword arguments: @@ -1992,17 +2167,20 @@ Keyword arguments: * package -- from what package the message is reported * path -- service instance path -### report\_service\_progress\_start +### report_service_progress_start ```python report_service_progress_start(sock, verbosity, msg, package, path) -> int ``` -Report progress events for a service. Used for calculation of the duration between two events. +Report progress events for a service. +Used for calculation of the duration between two events. -This function makes it possible to report transaction progress from FASTMAP code. +This function makes it possible to report transaction progress +from FASTMAP code. -This function is deprecated and will be removed in a future release. Use start\_progress\_span() instead. +This function is deprecated and will be removed in a future release. +Use start_progress_span() instead. Keyword arguments: @@ -2013,18 +2191,21 @@ Keyword arguments: * package -- from what package the message is reported * path -- service instance path -### report\_service\_progress\_stop +### report_service_progress_stop ```python report_service_progress_stop(sock, verbosity, msg, annotation, package, path) -> None ``` -Report progress events for a service. Used for calculation of the duration between two events. +Report progress events for a service. +Used for calculation of the duration between two events. -This function makes it possible to report transaction progress from FASTMAP code. +This function makes it possible to report transaction progress +from FASTMAP code. -This function is deprecated and will be removed in a future release. Use end\_progress\_span() instead. +This function is deprecated and will be removed in a future release. +Use end_progress_span() instead. 
Keyword arguments: @@ -2032,12 +2213,13 @@ Keyword arguments: * thandle -- transaction handle * verbosity -- at which verbosity level the message should be reported * msg -- message to report -* annotation -- metadata about the event, indicating error, explains latency or shows result etc +* annotation -- metadata about the event, indicating error, explains latency + or shows result etc * package -- from what package the message is reported * path -- service instance path * timestamp -- start of the event -### request\_action +### request_action ```python request_action(sock, params, hashed_ns, path) -> list @@ -2049,16 +2231,17 @@ Keyword arguments: * sock -- a python socket instance * params -- tagValue parameters for the action -* hashed\_ns -- namespace +* hashed_ns -- namespace * path -- path to action -### request\_action\_str\_th +### request_action_str_th ```python request_action_str_th(sock, thandle, cmd, path) -> str ``` -The same as request\_action\_th but takes the parameters as a string and returns the result as a string. +The same as request_action_th but takes the parameters as a string and +returns the result as a string. Keyword arguments: @@ -2067,13 +2250,13 @@ Keyword arguments: * cmd -- string parameters * path -- path to action -### request\_action\_th +### request_action_th ```python request_action_th(sock, thandle, params, path) -> list ``` -Same as for request\_action() but uses the current namespace. +Same as for request_action() but uses the current namespace. Keyword arguments: @@ -2095,13 +2278,15 @@ Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle -### roll\_config +### roll_config ```python roll_config(sock, thandle, path) -> int ``` -This function can be used to save the equivalent of a rollback file for a given configuration before it is committed (or a subtree thereof) in curly bracket format. Returns an id +This function can be used to save the equivalent of a rollback file for a +given configuration before it is committed (or a subtree thereof) in curly +bracket format. Returns an id Keyword arguments: @@ -2109,64 +2294,64 @@ Keyword arguments: * thandle -- transaction handle * path -- tree for which to save the rollback configuration -### roll\_config\_result +### roll_config_result ```python roll_config_result(sock, id) -> int ``` -We use this function to assert that we received the entire rollback configuration over a stream socket. +We use this function to assert that we received the entire rollback +configuration over a stream socket. Keyword arguments: * sock -- a python socket instance -* id -- the id returned from roll\_config() +* id -- the id returned from roll_config() -### save\_config +### save_config ```python save_config(sock, thandle, flags, path) -> int ``` -Save the config, returns an id. The flags parameter controls the saving as follows. The value is a bitmask. - -``` - CONFIG_XML -- The configuration format is XML. - CONFIG_XML_PRETTY -- The configuration format is pretty printed XML. - CONFIG_JSON -- The configuration is in JSON format. - CONFIG_J -- The configuration is in curly bracket Juniper CLI - format. - CONFIG_C -- The configuration is in Cisco XR style format. - CONFIG_TURBO_C -- The configuration is in Cisco XR style format. - A faster parser than the normal CLI will be used. - CONFIG_C_IOS -- The configuration is in Cisco IOS style format. - CONFIG_XPATH -- The path gives an XPath filter instead of a - keypath. Can only be used with CONFIG_XML and - CONFIG_XML_PRETTY. 
- CONFIG_WITH_DEFAULTS -- Default values are part of the - configuration dump. - CONFIG_SHOW_DEFAULTS -- Default values are also shown next to - the real configuration value. Applies only to the CLI formats. - CONFIG_WITH_OPER -- Include operational data in the dump. - CONFIG_HIDE_ALL -- Hide all hidden nodes. - CONFIG_UNHIDE_ALL -- Unhide all hidden nodes. - CONFIG_WITH_SERVICE_META -- Include NCS service-meta-data - attributes(refcounter, backpointer, out-of-band and - original-value) in the dump. - CONFIG_NO_PARENTS -- When a path is provided its parent nodes are by - default included. With this option the output will begin - immediately at path - skipping any parents. - CONFIG_OPER_ONLY -- Include only operational data, and ancestors to - operational data nodes, in the dump. - CONFIG_NO_BACKQUOTE -- This option can only be used together with - CONFIG_C and CONFIG_C_IOS. When set backslash will not be quoted - in strings. - CONFIG_CDB_ONLY -- Include only data stored in CDB in the dump. By - default only configuration data is included, but the flag can be - combined with either CONFIG_WITH_OPER or CONFIG_OPER_ONLY to - save both configuration and operational data, or only - operational data, respectively. -``` +Save the config, returns an id. +The flags parameter controls the saving as follows. The value is a bitmask. + + CONFIG_XML -- The configuration format is XML. + CONFIG_XML_PRETTY -- The configuration format is pretty printed XML. + CONFIG_JSON -- The configuration is in JSON format. + CONFIG_J -- The configuration is in curly bracket Juniper CLI + format. + CONFIG_C -- The configuration is in Cisco XR style format. + CONFIG_TURBO_C -- The configuration is in Cisco XR style format. + A faster parser than the normal CLI will be used. + CONFIG_C_IOS -- The configuration is in Cisco IOS style format. + CONFIG_XPATH -- The path gives an XPath filter instead of a + keypath. Can only be used with CONFIG_XML and + CONFIG_XML_PRETTY. + CONFIG_WITH_DEFAULTS -- Default values are part of the + configuration dump. + CONFIG_SHOW_DEFAULTS -- Default values are also shown next to + the real configuration value. Applies only to the CLI formats. + CONFIG_WITH_OPER -- Include operational data in the dump. + CONFIG_HIDE_ALL -- Hide all hidden nodes. + CONFIG_UNHIDE_ALL -- Unhide all hidden nodes. + CONFIG_WITH_SERVICE_META -- Include NCS service-meta-data + attributes(refcounter, backpointer, out-of-band and + original-value) in the dump. + CONFIG_NO_PARENTS -- When a path is provided its parent nodes are by + default included. With this option the output will begin + immediately at path - skipping any parents. + CONFIG_OPER_ONLY -- Include only operational data, and ancestors to + operational data nodes, in the dump. + CONFIG_NO_BACKQUOTE -- This option can only be used together with + CONFIG_C and CONFIG_C_IOS. When set backslash will not be quoted + in strings. + CONFIG_CDB_ONLY -- Include only data stored in CDB in the dump. By + default only configuration data is included, but the flag can be + combined with either CONFIG_WITH_OPER or CONFIG_OPER_ONLY to + save both configuration and operational data, or only + operational data, respectively. Keyword arguments: @@ -2175,7 +2360,7 @@ Keyword arguments: * flags -- as above * path -- save only configuration below path -### save\_config\_result +### save_config_result ```python save_config_result(sock, id) -> None @@ -2186,9 +2371,9 @@ Verify that we received the entire configuration over the stream socket. 
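To make the save_config / save_config_result pair above concrete, here is a minimal hedged sketch. It assumes an already connected MAAPI socket `sock` with a started transaction `thandle`, a local daemon reachable on the default IPC endpoint (written here as `_ncs.NCS_PORT`), and that the `CONFIG_XML_PRETTY` flag is exposed on the `_ncs.maapi` module; the id returned by save_config() is handed to stream_connect() (documented further down in this reference) and the dump is then read from that second socket.

```python
import socket

import _ncs
from _ncs import maapi

def dump_config_xml(sock, thandle, path='/devices'):
    # Ask the daemon to prepare a pretty-printed XML dump of the subtree.
    save_id = maapi.save_config(sock, thandle, maapi.CONFIG_XML_PRETTY, path)

    # The dump itself is delivered over a separate stream socket that is
    # associated with save_id via stream_connect().
    stream = socket.socket()
    _ncs.stream_connect(stream, save_id, 0, '127.0.0.1', _ncs.NCS_PORT)
    chunks = []
    while True:
        data = stream.recv(4096)
        if not data:
            break
        chunks.append(data)
    stream.close()

    # Raises an error if the whole configuration was not received.
    maapi.save_config_result(sock, save_id)
    return b''.join(chunks)
```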
Keyword arguments: * sock -- a python socket instance -* id -- the id returned from save\_config +* id -- the id returned from save_config -### set\_attr +### set_attr ```python set_attr(sock, thandle, attr, v, keypath) -> None @@ -2204,13 +2389,14 @@ Keyword arguments: * v -- value to set the attribute to * keypath -- path to choice -### set\_comment +### set_comment ```python set_comment(sock, thandle, comment) -> None ``` -Set the Comment that is stored in the rollback file when a transaction towards running is committed. +Set the Comment that is stored in the rollback file when a transaction +towards running is committed. Keyword arguments: @@ -2218,13 +2404,14 @@ Keyword arguments: * thandle -- transaction handle * comment -- the Comment -### set\_delayed\_when +### set_delayed_when ```python set_delayed_when(sock, thandle, on) -> None ``` -This function enables (on non-zero) or disables (on == 0) the 'delayed when' mode of a transaction. +This function enables (on non-zero) or disables (on == 0) the 'delayed when' +mode of a transaction. Keyword arguments: @@ -2232,7 +2419,7 @@ Keyword arguments: * thandle -- transaction handle * on -- disables when on=0, enables for all other n -### set\_elem +### set_elem ```python set_elem(sock, thandle, v, path) -> None @@ -2247,7 +2434,7 @@ Keyword arguments: * v -- confdValue * path -- position of elem -### set\_elem2 +### set_elem2 ```python set_elem2(sock, thandle, strval, path) -> None @@ -2262,13 +2449,13 @@ Keyword arguments: * strval -- confdValue * path -- position of elem -### set\_flags +### set_flags ```python set_flags(sock, thandle, flags) -> None ``` -Modify read/write session aspect. See MAAPI\_FLAG\_xyz. +Modify read/write session aspect. See MAAPI_FLAG_xyz. Keyword arguments: @@ -2276,13 +2463,14 @@ Keyword arguments: * thandle -- transaction handle * flags -- flags to set -### set\_label +### set_label ```python set_label(sock, thandle, label) -> None ``` -Set the Label that is stored in the rollback file when a transaction towards running is committed. +Set the Label that is stored in the rollback file when a transaction +towards running is committed. Keyword arguments: @@ -2290,7 +2478,7 @@ Keyword arguments: * thandle -- transaction handle * label -- the Label -### set\_namespace +### set_namespace ```python set_namespace(sock, thandle, hashed_ns) -> None @@ -2302,22 +2490,25 @@ Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle -* hashed\_ns -- the namespace to use +* hashed_ns -- the namespace to use -### set\_next\_user\_session\_id +### set_next_user_session_id ```python set_next_user_session_id(sock, usessid) -> None ``` -Set the user session id that will be assigned to the next user session started. The given value is silently forced to be in the range 100 .. 2^31-1. This function can be used to ensure that session ids for user sessions started by northbound agents or via MAAPI are unique across a restart. +Set the user session id that will be assigned to the next user session +started. The given value is silently forced to be in the range 100 .. 2^31-1. +This function can be used to ensure that session ids for user sessions +started by northbound agents or via MAAPI are unique across a restart. 
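As an illustration of how the write-oriented calls documented above (set_elem2, set_label, set_comment) fit into a transaction, here is a hedged sketch. It assumes `sock` already carries a user session, that the datastore and access constants are exposed as `_ncs.RUNNING` and `_ncs.READ_WRITE`, and it uses a made-up leaf path.

```python
import _ncs
from _ncs import maapi

def set_with_rollback_metadata(sock):
    th = maapi.start_trans(sock, _ncs.RUNNING, _ncs.READ_WRITE)
    maapi.set_elem2(sock, th, '42', '/example:config/threshold')  # value given as a string
    maapi.set_label(sock, th, 'threshold-update')       # recorded in the rollback file
    maapi.set_comment(sock, th, 'changed via a MAAPI script')
    maapi.apply_trans(sock, th, 0)                       # validate, prepare and commit
    maapi.finish_trans(sock, th)
```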
Keyword arguments: * sock -- a python socket instance * usessid -- user session id -### set\_object +### set_object ```python set_object(sock, thandle, values, keypath) -> None @@ -2332,7 +2523,7 @@ Keyword arguments: * values -- list of values * keypath -- path to set -### set\_readonly\_mode +### set_readonly_mode ```python set_readonly_mode(sock, flag) -> None @@ -2345,7 +2536,7 @@ Keyword arguments: * sock -- a python socket instance * flag -- non-zero means read-only mode -### set\_running\_db\_status +### set_running_db_status ```python set_running_db_status(sock, status) -> None @@ -2358,7 +2549,7 @@ Keyword arguments: * sock -- a python socket instance * status -- integer status to set -### set\_user\_session +### set_user_session ```python set_user_session(sock, usessid) -> None @@ -2371,7 +2562,7 @@ Keyword arguments: * sock -- a python socket instance * usessid -- user session id -### set\_values +### set_values ```python set_values(sock, thandle, values, keypath) -> None @@ -2386,13 +2577,13 @@ Keyword arguments: * values -- list of tagValues * keypath -- path to set -### shared\_apply\_template +### shared_apply_template ```python shared_apply_template(sock, thandle, template, variables,flags, rootpath) -> None ``` -FASTMAP version of ncs\_apply\_template. +FASTMAP version of ncs_apply_template. Keyword arguments: @@ -2403,13 +2594,13 @@ Keyword arguments: * flags -- Must be set as 0 * rootpath -- in what context to apply the template -### shared\_copy\_tree +### shared_copy_tree ```python shared_copy_tree(sock, thandle, flags, frompath, topath) -> None ``` -FASTMAP version of copy\_tree. +FASTMAP version of copy_tree. Keyword arguments: @@ -2419,7 +2610,7 @@ Keyword arguments: * frompath -- the path to copy the tree from * topath -- the path to copy the tree to -### shared\_create +### shared_create ```python shared_create(sock, thandle, flags, path) -> None @@ -2433,7 +2624,7 @@ Keyword arguments: * thandle -- transaction handle * flags -- Must be set as 0 -### shared\_insert +### shared_insert ```python shared_insert(sock, thandle, flags, path) -> None @@ -2448,13 +2639,13 @@ Keyword arguments: * flags -- Must be set as 0 * path -- the path to the list to insert a new entry into -### shared\_set\_elem +### shared_set_elem ```python shared_set_elem(sock, thandle, v, flags, path) -> None ``` -FASTMAP version of set\_elem. +FASTMAP version of set_elem. Keyword arguments: @@ -2464,13 +2655,13 @@ Keyword arguments: * flags -- should be 0 * path -- the path to the element to set -### shared\_set\_elem2 +### shared_set_elem2 ```python shared_set_elem2(sock, thandle, strval, flags, path) -> None ``` -FASTMAP version of set\_elem2. +FASTMAP version of set_elem2. Keyword arguments: @@ -2480,13 +2671,13 @@ Keyword arguments: * flags -- should be 0 * path -- the path to the element to set -### shared\_set\_values +### shared_set_values ```python shared_set_values(sock, thandle, values, flags, keypath) -> None ``` -FASTMAP version of set\_values. +FASTMAP version of set_values. Keyword arguments: @@ -2496,7 +2687,7 @@ Keyword arguments: * flags -- should be 0 * keypath -- path to set -### snmpa\_reload +### snmpa_reload ```python snmpa_reload(sock, synchronous) -> None @@ -2504,149 +2695,184 @@ snmpa_reload(sock, synchronous) -> None Start a reload of SNMP Agent config from external data provider. -Used by external data provider to notify that there is a change to the SNMP Agent config data. 
Calling the function with the argument 'synchronous' set to 1 or True means that the call will block until the loading is completed. +Used by external data provider to notify that there is a change to the SNMP +Agent config data. Calling the function with the argument 'synchronous' set +to 1 or True means that the call will block until the loading is completed. Keyword arguments: * sock -- a python socket instance -* synchronous -- if 1, will wait for the loading complete and return when the loading is complete; if 0, will only initiate the loading and return immediately +* synchronous -- if 1, will wait for the loading complete and return when + the loading is complete; if 0, will only initiate the loading and return + immediately -### start\_phase +### start_phase ```python start_phase(sock, phase, synchronous) -> None ``` -When the system has been started in phase0, this function tells the system to proceed to start phase 1 or 2. +When the system has been started in phase0, this function tells the system +to proceed to start phase 1 or 2. Keyword arguments: * sock -- a python socket instance * phase -- phase to start, 1 or 2 -* synchronous -- if 1, will wait for the loading complete and return when the loading is complete; if 0, will only initiate the loading of AAA data and return immediately +* synchronous -- if 1, will wait for the loading complete and return when + the loading is complete; if 0, will only initiate the loading of AAA + data and return immediately -### start\_progress\_span +### start_progress_span ```python start_progress_span(sock, msg, verbosity, attrs, links, path) -> dict ``` -Starts a progress span. Progress spans are trace messages written to the progress trace and the developer log. A progress span consists of a start and a stop event which can be used to calculate the duration between the two. Those events can be identified with unique span-ids. Inside the span it is possible to start new spans, which will then become child spans, the parent-span-id is set to the previous spans' span-id. A child span can be used to calculate the duration of a sub task, and is started from consecutive maapi\_start\_progress\_span() calls, and is ended with maapi\_end\_progress\_span(). +Starts a progress span. Progress spans are trace messages written to the +progress trace and the developer log. A progress span consists of a start +and a stop event which can be used to calculate the duration between the +two. Those events can be identified with unique span-ids. Inside the span +it is possible to start new spans, which will then become child spans, +the parent-span-id is set to the previous spans' span-id. A child span +can be used to calculate the duration of a sub task, and is started from +consecutive maapi_start_progress_span() calls, and is ended with +maapi_end_progress_span(). 
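A hedged sketch of the span lifecycle described above. The verbosity constant, the attribute dict, the keypath and, in particular, the assumption that end_progress_span() accepts the dict returned by start_progress_span() plus an annotation string are illustrative guesses rather than signatures shown in this section.

```python
import _ncs
from _ncs import maapi

def traced_step(sock):
    # Open a span; spans started while it is open become its children.
    span = maapi.start_progress_span(
        sock, 'allocating resources', _ncs.VERBOSITY_NORMAL,
        {'tenant': 'acme'},                    # user-defined attributes
        [],                                    # no links to other traces/spans
        '/example:services/resource-pool')     # made-up keypath
    try:
        pass  # the work being measured goes here
    finally:
        # Assumed counterpart call that closes the span.
        maapi.end_progress_span(sock, span, '')
```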
-The concepts of traces, trace-id and spans are highly influenced by https://opentelemetry.io/docs/concepts/signals/traces/#spans +The concepts of traces, trace-id and spans are highly influenced by +https://opentelemetry.io/docs/concepts/signals/traces/#spans Keyword arguments: * sock -- a python socket instance * msg -- message to report -* verbosity -- VERBOSITY\_\*, default: VERBOSITY\_NORMAL (optional) +* verbosity -- VERBOSITY_*, default: VERBOSITY_NORMAL (optional) * attrs -- user defined attributes (dict) -* links -- to existing traces or spans \[{'trace\_id':'...', 'span\_id':'...'}] +* links -- to existing traces or spans [{'trace_id':'...', 'span_id':'...'}] * path -- keypath to an action/leaf/service -### start\_progress\_span\_th +### start_progress_span_th ```python start_progress_span_th(sock, thandle, msg, verbosity, attrs, links, path) -> dict ``` -Starts a progress span. Progress spans are trace messages written to the progress trace and the developer log. A progress span consists of a start and a stop event which can be used to calculate the duration between the two. Those events can be identified with unique span-ids. Inside the span it is possible to start new spans, which will then become child spans, the parent-span-id is set to the previous spans' span-id. A child span can be used to calculate the duration of a sub task, and is started from consecutive maapi\_start\_progress\_span() calls, and is ended with maapi\_end\_progress\_span(). +Starts a progress span. Progress spans are trace messages written to the +progress trace and the developer log. A progress span consists of a start +and a stop event which can be used to calculate the duration between the +two. Those events can be identified with unique span-ids. Inside the span +it is possible to start new spans, which will then become child spans, +the parent-span-id is set to the previous spans' span-id. A child span +can be used to calculate the duration of a sub task, and is started from +consecutive maapi_start_progress_span() calls, and is ended with +maapi_end_progress_span(). -The concepts of traces, trace-id and spans are highly influenced by https://opentelemetry.io/docs/concepts/signals/traces/#spans +The concepts of traces, trace-id and spans are highly influenced by +https://opentelemetry.io/docs/concepts/signals/traces/#spans Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle * msg -- message to report -* verbosity -- VERBOSITY\_\*, default: VERBOSITY\_NORMAL (optional) +* verbosity -- VERBOSITY_*, default: VERBOSITY_NORMAL (optional) * attrs -- user defined attributes (dict) -* links -- to existing traces or spans \[{'trace\_id':'...', 'span\_id':'...'}] +* links -- to existing traces or spans [{'trace_id':'...', 'span_id':'...'}] * path -- keypath to an action/leaf/service -### start\_trans +### start_trans ```python start_trans(sock, name, readwrite) -> int ``` -Creates a new transaction towards the data store specified by name, which can be one of CONFD\_CANDIDATE, CONFD\_RUNNING, or CONFD\_STARTUP (however updating the startup data store is better done via maapi\_copy\_running\_to\_startup()). The readwrite parameter can be either CONFD\_READ, to start a readonly transaction, or CONFD\_READ\_WRITE, to start a read-write transaction. The function returns the transaction id. 
+Creates a new transaction towards the data store specified by name, which +can be one of CONFD_CANDIDATE, CONFD_RUNNING, or CONFD_STARTUP (however +updating the startup data store is better done via +maapi_copy_running_to_startup()). The readwrite parameter can be either +CONFD_READ, to start a readonly transaction, or CONFD_READ_WRITE, to start +a read-write transaction. The function returns the transaction id. Keyword arguments: * sock -- a python socket instance * name -- name of the database -* readwrite -- CONFD\_READ or CONFD\_WRITE +* readwrite -- CONFD_READ or CONFD_WRITE -### start\_trans2 +### start_trans2 ```python start_trans2(sock, name, readwrite, usid) -> int ``` -Start a transaction within an existing user session, returns the transaction id. +Start a transaction within an existing user session, returns the transaction +id. Keyword arguments: * sock -- a python socket instance * name -- name of the database -* readwrite -- CONFD\_READ or CONFD\_WRITE +* readwrite -- CONFD_READ or CONFD_WRITE * usid -- user session id -### start\_trans\_flags +### start_trans_flags ```python start_trans_flags(sock, name, readwrite, usid) -> int ``` -The same as start\_trans2, but can also set the same flags that 'set\_flags' can set. +The same as start_trans2, but can also set the same flags that 'set_flags' +can set. Keyword arguments: * sock -- a python socket instance * name -- name of the database -* readwrite -- CONFD\_READ or CONFD\_WRITE +* readwrite -- CONFD_READ or CONFD_WRITE * usid -- user session id -* flags -- same as for 'set\_flags' +* flags -- same as for 'set_flags' -### start\_trans\_flags2 +### start_trans_flags2 ```python start_trans_flags2(sock, name, readwrite, usid, vendor, product, version, client_id) -> int ``` -This function does the same as start\_trans\_flags() but allows for additional information to be passed to ConfD/NCS. +This function does the same as start_trans_flags() but allows for +additional information to be passed to ConfD/NCS. Keyword arguments: * sock -- a python socket instance * name -- name of the database -* readwrite -- CONFD\_READ or CONFD\_WRITE +* readwrite -- CONFD_READ or CONFD_WRITE * usid -- user session id -* flags -- same as for 'set\_flags' +* flags -- same as for 'set_flags' * vendor -- vendor string (may be None) * product -- product string (may be None) * version -- version string (may be None) -* client\_id -- client identification string (may be None) +* client_id -- client identification string (may be None) -### start\_trans\_in\_trans +### start_trans_in_trans ```python start_trans_in_trans(sock, readwrite, usid, thandle) -> int ``` -Start a transaction within an existing transaction, using the started transaction as backend instead of an actual data store. Returns the transaction id as an integer. +Start a transaction within an existing transaction, using the started +transaction as backend instead of an actual data store. Returns the +transaction id as an integer. Keyword arguments: * sock -- a python socket instance -* readwrite -- CONFD\_READ or CONFD\_WRITE +* readwrite -- CONFD_READ or CONFD_WRITE * usid -- user session id * thandle -- identifies the backend transaction to use -### start\_user\_session +### start_user_session ```python start_user_session(sock, username, context, groups, src_addr, prot) -> None @@ -2663,7 +2889,7 @@ Keyword arguments: * src-addr -- src address of e.g. 
the client connecting * prot -- the protocol used by the client for connecting -### start\_user\_session2 +### start_user_session2 ```python start_user_session2(sock, username, context, groups, src_addr, src_port, prot) -> None @@ -2681,7 +2907,7 @@ Keyword arguments: * src-port -- src port of e.g. the client connecting * prot -- the protocol used by the client for connecting -### start\_user\_session3 +### start_user_session3 ```python start_user_session3(sock, username, context, groups, src_addr, src_port, prot, vendor, product, version, client_id) -> None @@ -2689,7 +2915,8 @@ start_user_session3(sock, username, context, groups, src_addr, src_port, prot, v Establish a user session on the socket. -This function does the same as start\_user\_session2() but allows for additional information to be passed to ConfD/NCS. +This function does the same as start_user_session2() but allows for +additional information to be passed to ConfD/NCS. Keyword arguments: @@ -2703,9 +2930,9 @@ Keyword arguments: * vendor -- vendor string (may be None) * product -- product string (may be None) * version -- version string (may be None) -* client\_id -- client identification string (may be None) +* client_id -- client identification string (may be None) -### start\_user\_session\_gen +### start_user_session_gen ```python start_user_session_gen(sock, username, context, groups, vendor, product, version, client_id) -> None @@ -2713,7 +2940,8 @@ start_user_session_gen(sock, username, context, groups, vendor, product, versio Establish a user session on the socket. -This function does the same as start\_user\_session3() but it takes the source address of the supplied socket from the OS. +This function does the same as start_user_session3() but +it takes the source address of the supplied socket from the OS. Keyword arguments: @@ -2724,7 +2952,7 @@ Keyword arguments: * vendor -- vendor string (may be None) * product -- product string (may be None) * version -- version string (may be None) -* client\_id -- client identification string (may be None) +* client_id -- client identification string (may be None) ### stop @@ -2738,13 +2966,14 @@ Keyword arguments: * sock -- a python socket instance -### sys\_message +### sys_message ```python sys_message(sock, to, message) -> None ``` -Send a message to a specific user, a specific session or all user depending on the 'to' parameter. 'all', or can be used. +Send a message to a specific user, a specific session or all users depending +on the 'to' parameter. 'all', a user name or a session id can be used. Keyword arguments: @@ -2752,19 +2981,20 @@ Keyword arguments: * to -- user to send message to or 'all' to send to all users * message -- the message -### unhide\_group +### unhide_group ```python unhide_group(sock, thandle, group_name) -> None ``` -Unhide all nodes belonging to a hide group in a transaction that started with flag FLAG\_HIDE\_ALL\_HIDEGROUPS. +Unhide all nodes belonging to a hide group in a transaction that started +with flag FLAG_HIDE_ALL_HIDEGROUPS.
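The start_user_session* functions above assume a connected MAAPI socket. A minimal, hedged bootstrap sketch follows; the user name, the context string, the group list and constants such as `_ncs.NCS_PORT` and `_ncs.PROTO_TCP` are assumptions, not values mandated by this API.

```python
import socket

import _ncs
from _ncs import maapi

sock = socket.socket()
maapi.connect(sock, '127.0.0.1', _ncs.NCS_PORT)   # default local IPC endpoint assumed
maapi.load_schemas(sock)                          # needed for schema-aware helpers
maapi.start_user_session(sock, 'admin', 'system', ['admin'],
                         '127.0.0.1', _ncs.PROTO_TCP)
th = maapi.start_trans(sock, _ncs.RUNNING, _ncs.READ)
# ... read data with get_elem(), get_object(), etc. ...
maapi.finish_trans(sock, th)
maapi.end_user_session(sock)
```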
Keyword arguments: * sock -- a python socket instance * thandle -- transaction handle -* group\_name -- the group name +* group_name -- the group name ### unlock @@ -2779,7 +3009,7 @@ Keyword arguments: * sock -- a python socket instance * name -- name of the database to unlock -### unlock\_partial +### unlock_partial ```python unlock_partial(sock, lockid) -> None @@ -2792,7 +3022,7 @@ Keyword arguments: * sock -- a python socket instance * lockid -- id of the lock -### user\_message +### user_message ```python user_message(sock, to, message, sender) -> None @@ -2807,7 +3037,7 @@ Keyword arguments: * message -- the message * sender -- send as -### validate\_trans +### validate_trans ```python validate_trans(sock, thandle, unlock, forcevalidation) -> None @@ -2815,11 +3045,20 @@ validate_trans(sock, thandle, unlock, forcevalidation) -> None Validates all data written in a transaction. -If unlock is 1 (or True), the transaction is open for further editing even if validation succeeds. If unlock is 0 (or False) and the function returns CONFD\_OK, the next function to be called MUST be maapi\_prepare\_trans() or maapi\_finish\_trans(). +If unlock is 1 (or True), the transaction is open for further editing even +if validation succeeds. If unlock is 0 (or False) and the function returns +CONFD_OK, the next function to be called MUST be maapi_prepare_trans() or +maapi_finish_trans(). -unlock = 1 can be used to implement a 'validate' command which can be given in the middle of an editing session. The first thing that happens is that a lock is set. If unlock == 1, the lock is released on success. The lock is always released on failure. +unlock = 1 can be used to implement a 'validate' command which can be +given in the middle of an editing session. The first thing that happens is +that a lock is set. If unlock == 1, the lock is released on success. The +lock is always released on failure. -The forcevalidation argument should normally be 0 (or False). It has no effect for a transaction towards the running or startup data stores, validation is always performed. For a transaction towards the candidate data store, validation will not be done unless forcevalidation is non-zero. +The forcevalidation argument should normally be 0 (or False). It has no +effect for a transaction towards the running or startup data stores, +validation is always performed. For a transaction towards the candidate +data store, validation will not be done unless forcevalidation is non-zero. Keyword arguments: @@ -2828,7 +3067,7 @@ Keyword arguments: * unlock -- int or bool * forcevalidation -- int or bool -### wait\_start +### wait_start ```python wait_start(sock, phase) -> None @@ -2841,7 +3080,7 @@ Keyword arguments: * sock -- a python socket instance * phase -- phase to wait for, 0, 1 or 2 -### write\_service\_log\_entry +### write_service_log_entry ```python write_service_log_entry(sock, path, msg, type, level) -> None @@ -2849,7 +3088,8 @@ write_service_log_entry(sock, path, msg, type, level) -> None Write service log entries. -This function makes it possible to write service log entries from FASTMAP code. +This function makes it possible to write service log entries from +FASTMAP code. 
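A short, hedged sketch of the 'validate in the middle of an editing session' pattern described under validate_trans above, assuming an open read-write transaction and that validation failures surface as exceptions.

```python
from _ncs import maapi

def validate_midway(sock, thandle):
    # unlock=1: the validation lock is released on success, so the
    # transaction stays open for further edits.
    try:
        maapi.validate_trans(sock, thandle, 1, 0)
        return True
    except Exception as err:  # the library reports failures as exceptions
        print('validation failed:', err)
        return False
```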
Keyword arguments: @@ -2872,7 +3112,7 @@ Keyword arguments: * sock -- a python socket instance * xpath -- to convert -### xpath2kpath\_th +### xpath2kpath_th ```python xpath2kpath_th(sock, thandle, xpath) -> _ncs.HKeypathRef @@ -2886,13 +3126,21 @@ Keyword arguments: * thandle -- transaction handle * xpath -- to convert -### xpath\_eval +### xpath_eval ```python xpath_eval(sock, thandle, expr, result, trace, path) -> None ``` -Evaluate the xpath expression in 'expr'. For each node in the resulting node the function 'result' is called with the keypath to the resulting node as the first argument and, if the node is a leaf and has a value. the value of that node as the second argument. For each invocation of 'result' the function should return ITER\_CONTINUE to tell the XPath evaluator to continue or ITER\_STOP to stop the evaluation. A trace function, 'pytrace', could be supplied and will be called with a single string as an argument. 'None' can be used if no trace is needed. Unless a 'path' is given the root node will be used as a context for the evaluations. +Evaluate the xpath expression in 'expr'. For each node in the resulting +node the function 'result' is called with the keypath to the resulting +node as the first argument and, if the node is a leaf and has a value. the +value of that node as the second argument. For each invocation of 'result' +the function should return ITER_CONTINUE to tell the XPath evaluator to +continue or ITER_STOP to stop the evaluation. A trace function, 'pytrace', +could be supplied and will be called with a single string as an argument. +'None' can be used if no trace is needed. Unless a 'path' is given the +root node will be used as a context for the evaluations. Keyword arguments: @@ -2903,13 +3151,13 @@ Keyword arguments: * trace -- a trace function that takes a string as a parameter * path -- the context node -### xpath\_eval\_expr +### xpath_eval_expr ```python xpath_eval_expr(sock, thandle, expr, trace, path) -> str ``` -Like xpath\_eval but returns a string. +Like xpath_eval but returns a string. Keyword arguments: @@ -2919,11 +3167,12 @@ Keyword arguments: * trace -- a trace function that takes a string as a parameter * path -- the context node + ## Classes ### _class_ **Cursor** -struct maapi\_cursor object +struct maapi_cursor object Members: diff --git a/developer-reference/pyapi/_ncs.md b/developer-reference/pyapi/_ncs.md index cda0def3..29cb1e62 100644 --- a/developer-reference/pyapi/_ncs.md +++ b/developer-reference/pyapi/_ncs.md @@ -1,29 +1,33 @@ -# \_ncs Module +# Python _ncs Module NCS Python low level module. -This module and its submodules provide Python bindings for the C APIs, described by the [confd\_lib(3)](../../resources/man/confd_lib.3.md) man page. +This module and its submodules provide Python bindings for the C APIs, +described by the [confd_lib(3)](../../resources/man/confd_lib.3.md) man page. -The companion high level module, ncs, provides an abstraction layer on top of this module and may be easier to use. +The companion high level module, ncs, provides an abstraction layer on top of +this module and may be easier to use. ## Submodules -* [\_ncs.cdb](_ncs.cdb.md): Low level module for connecting to NCS built-in XML database (CDB). -* [\_ncs.dp](_ncs.dp.md): Low level callback module for connecting data providers to NCS. -* [\_ncs.error](_ncs.error.md): This module defines new NCS Python API exception classes. -* [\_ncs.events](_ncs.events.md): Low level module for subscribing to NCS event notifications. 
-* [\_ncs.ha](_ncs.ha.md): Low level module for connecting to NCS HA subsystem. -* [\_ncs.maapi](_ncs.maapi.md): Low level module for connecting to NCS with a read/write interface inside transactions. +- [_ncs.cdb](_ncs.cdb.md): Low level module for connecting to NCS built-in XML database (CDB). +- [_ncs.dp](_ncs.dp.md): Low level callback module for connecting data providers to NCS. +- [_ncs.error](_ncs.error.md): This module defines new NCS Python API exception classes. +- [_ncs.events](_ncs.events.md): Low level module for subscribing to NCS event notifications. +- [_ncs.ha](_ncs.ha.md): Low level module for connecting to NCS HA subsystem. +- [_ncs.maapi](_ncs.maapi.md): Low level module for connecting to NCS with a read/write interface +inside transactions. ## Functions -### cs\_node\_cd +### cs_node_cd ```python cs_node_cd(start, path) -> Union[CsNode, None] ``` -Utility function which finds the resulting CsNode given an (optional) starting node and a (relative or absolute) string keypath. +Utility function which finds the resulting CsNode given an (optional) +starting node and a (relative or absolute) string keypath. Keyword arguments: @@ -36,23 +40,28 @@ Keyword arguments: decrypt(ciphertext) -> str ``` -When data is read over the CDB interface, the MAAPI interface or received in event notifications, the data for the builtin types tailf:aes-cfb-128-encrypted-string and tailf:aes-256-cfb-128-encrypted-string is encrypted. This function decrypts ciphertext and returns the clear text as a string. +When data is read over the CDB interface, the MAAPI interface or received +in event notifications, the data for the builtin types +tailf:aes-cfb-128-encrypted-string and +tailf:aes-256-cfb-128-encrypted-string is encrypted. +This function decrypts ciphertext and returns the clear text as +a string. Keyword arguments: * ciphertext -- encrypted string -### expr\_op2str +### expr_op2str ```python expr_op2str(op) -> str ``` -Convert confd\_expr\_op value to a string. +Convert confd_expr_op value to a string. Keyword arguments: -* op -- confd\_expr\_op integer value +* op -- confd_expr_op integer value ### fatal @@ -60,84 +69,104 @@ Keyword arguments: fatal(str) -> None ``` -Utility function which formats a string, prints it to stderr and exits with exit code 1. This function will never return. +Utility function which formats a string, prints it to stderr and exits with +exit code 1. This function will never return. Keyword arguments: * str -- a message string -### find\_cs\_node +### find_cs_node ```python find_cs_node(hkeypath, len) -> Union[CsNode, None] ``` -Utility function which finds the CsNode corresponding to the len first elements of the hashed keypath. To make the search consider the full keypath leave out the len parameter. +Utility function which finds the CsNode corresponding to the len first +elements of the hashed keypath. To make the search consider the full +keypath leave out the len parameter. Keyword arguments: * hkeypath -- a HKeypathRef instance * len -- number of elements to return (optional) -### find\_cs\_node\_child +### find_cs_node_child ```python find_cs_node_child(parent, xmltag) -> Union[CsNode, None] ``` -Utility function which finds the CsNode corresponding to the child node given as xmltag. +Utility function which finds the CsNode corresponding to the child node +given as xmltag. -See confd\_find\_cs\_node\_child() in [confd\_lib\_lib(3)](../../resources/man/confd_lib_lib.3.md). 
+See confd_find_cs_node_child() in [confd_lib_lib(3)](../../resources/man/confd_lib_lib.3.md). Keyword arguments: * parent -- the parent CsNode * xmltag -- the child node -### find\_cs\_root +### find_cs_root ```python find_cs_root(ns) -> Union[CsNode, None] ``` -When schema information is available to the library, this function returns the root of the tree representaton of the namespace given by ns for the (first) toplevel node. For namespaces that are augmented into other namespaces such that they do not have a toplevel node, this function returns None - the nodes of such a namespace are found below the augment target node(s) in other tree(s). +When schema information is available to the library, this function returns +the root of the tree representation of the namespace given by ns for the +(first) toplevel node. For namespaces that are augmented into other +namespaces such that they do not have a toplevel node, this function returns +None - the nodes of such a namespace are found below the augment target +node(s) in other tree(s). Keyword arguments: * ns -- the namespace id -### find\_ns\_type +### find_ns_type ```python find_ns_type(nshash, name) -> Union[CsType, None] ``` -Returns a CsType type definition for the type named name, which is defined in the namespace identified by nshash, or None if the type could not be found. If nshash is 0, the type name will be looked up among the built-in types (i.e. the YANG built-in types, the types defined in the YANG "tailf-common" module, and the types defined in the "confd" and "xs" namespaces). +Returns a CsType type definition for the type named name, which is defined +in the namespace identified by nshash, or None if the type could not be +found. If nshash is 0, the type name will be looked up among the built-in +types (i.e. the YANG built-in types, the types defined in the YANG +"tailf-common" module, and the types defined in the "confd" and "xs" +namespaces). Keyword arguments: * nshash -- a namespace hash or 0 (0 searches for built-in types) * name -- the name of the type -### get\_leaf\_list\_type +### get_leaf_list_type ```python get_leaf_list_type(node) -> CsType ``` -For a leaf-list node, the type() method in the CsNodeInfo identifies a "list type" for the leaf-list "itself". This function returns the type of the elements in the leaf-list, i.e. corresponding to the type substatement for the leaf-list in the YANG module. +For a leaf-list node, the type() method in the CsNodeInfo identifies a +"list type" for the leaf-list "itself". This function returns the type +of the elements in the leaf-list, i.e. corresponding to the type +substatement for the leaf-list in the YANG module. Keyword arguments: * node -- The CsNode of the leaf-list -### get\_nslist +### get_nslist ```python get_nslist() -> list ``` -Provides a list of the namespaces known to the library as a list of five-tuples. Each tuple contains the the namespace hash (int), the prefix (string), the namespace uri (string), the revision (string), and the module name (string). +Provides a list of the namespaces known to the library as a list of +five-tuples. Each tuple contains the namespace hash (int), the prefix +(string), the namespace uri (string), the revision (string), and the +module name (string). If schemas are not loaded an empty list will be returned. @@ -147,13 +176,15 @@ If schemas are not loaded an empty list will be returned. hash2str(hash) -> Union[str, None] ``` -Returns a string representing the node name given by hash, or None if the hash value is not found.
Requires that schema information has been loaded from the NCS daemon into the library - otherwise it always returns None. +Returns a string representing the node name given by hash, or None if the +hash value is not found. Requires that schema information has been loaded +from the NCS daemon into the library - otherwise it always returns None. Keyword arguments: * hash -- a hash -### hkeypath\_dup +### hkeypath_dup ```python hkeypath_dup(hkeypath) -> HKeypathRef @@ -165,7 +196,7 @@ Keyword arguments: * hkeypath -- a HKeypathRef instance -### hkeypath\_dup\_len +### hkeypath_dup_len ```python hkeypath_dup_len(hkeypath, len) -> HKeypathRef @@ -178,26 +209,31 @@ Keyword arguments: * hkeypath -- a HKeypathRef instance * len -- number of elements to include in the copy -### hkp\_prefix\_tagmatch +### hkp_prefix_tagmatch ```python hkp_prefix_tagmatch(hkeypath, tags) -> bool ``` -A simplified version of hkp\_tagmatch() - it returns True if the tagpath matches a prefix of the hkeypath, i.e. it is equivalent to calling hkp\_tagmatch() and checking if the return value includes CONFD\_HKP\_MATCH\_TAGS. +A simplified version of hkp_tagmatch() - it returns True if the tagpath +matches a prefix of the hkeypath, i.e. it is equivalent to calling +hkp_tagmatch() and checking if the return value includes CONFD_HKP_MATCH_TAGS. Keyword arguments: * hkeypath -- a HKeypathRef instance * tags -- a list of XmlTag instances -### hkp\_tagmatch +### hkp_tagmatch ```python hkp_tagmatch(hkeypath, tags) -> int ``` -When checking the hkeypaths that get passed into each iteration in e.g. cdb\_diff\_iterate() we can either explicitly check the paths, or use this function to do the job. The tags list (typically statically initialized) specifies a tagpath to match against the hkeypath. See cdb\_diff\_match(). +When checking the hkeypaths that get passed into each iteration in e.g. +cdb_diff_iterate() we can either explicitly check the paths, or use this +function to do the job. The tags list (typically statically initialized) +specifies a tagpath to match against the hkeypath. See cdb_diff_match(). Keyword arguments: @@ -210,7 +246,9 @@ Keyword arguments: init(name, file, level) -> None ``` -Initializes the ConfD library. Must be called before any other NCS API functions are called. There should be no need to call this function directly. It is called internally when the Python module is loaded. +Initializes the ConfD library. Must be called before any other NCS API +functions are called. There should be no need to call this function +directly. It is called internally when the Python module is loaded. Keyword arguments: @@ -218,7 +256,7 @@ Keyword arguments: * file -- (optional) * level -- (optional) -### internal\_connect +### internal_connect ```python internal_connect(id, sock, ip, port, path) -> None @@ -226,55 +264,67 @@ internal_connect(id, sock, ip, port, path) -> None Internal function used by NCS Python VM. -### list\_filter\_type2str +### list_filter_type2str ```python list_filter_type2str(op) -> str ``` -Convert confd\_list\_filter\_type value to a string. +Convert confd_list_filter_type value to a string. Keyword arguments: -* type -- confd\_list\_filter\_type integer value +* type -- confd_list_filter_type integer value -### max\_object\_size +### max_object_size ```python max_object_size(object) -> int ``` -Utility function which returns the maximum size (i.e. 
the needed length of the confd\_value\_t array) for an "object" retrieved by cdb\_get\_object(), maapi\_get\_object(), and corresponding multi-object functions. +Utility function which returns the maximum size (i.e. the needed length of +the confd_value_t array) for an "object" retrieved by cdb_get_object(), +maapi_get_object(), and corresponding multi-object functions. Keyword arguments: * object -- the CsNode -### mmap\_schemas +### mmap_schemas ```python mmap_schemas(filename) -> None ``` -If shared memory schema support has been enabled, this function will will map a shared memory segment into the current process address space and make it ready for use. +If shared memory schema support has been enabled, this function will +map a shared memory segment into the current process address space +and make it ready for use. -The filename can be obtained by using the get\_schema\_file\_path() function +The filename can be obtained by using the get_schema_file_path() function. -The filename argument specifies the pathname of the file that is used as backing store. +The filename argument specifies the pathname of the file that is used as +backing store. Keyword arguments: * filename -- a filename string -### next\_object\_node +### next_object_node ```python next_object_node(object, cur, value) -> Union[CsNode, None] ``` -Utility function to allow navigation of the confd\_cs\_node schema tree in parallel with the confd\_value\_t array populated by cdb\_get\_object(), maapi\_get\_object(), and corresponding multi-object functions. +Utility function to allow navigation of the confd_cs_node schema tree in +parallel with the confd_value_t array populated by cdb_get_object(), +maapi_get_object(), and corresponding multi-object functions. -The cur parameter is the CsNode for the current value, and the value parameter is the current value in the array. The function returns a CsNode for the next value in the array, or None when the complete object has been traversed. In the initial call for a given traversal, we must pass self.children() for the cur parameter - this always points to the CsNode for the first value in the array. +The cur parameter is the CsNode for the current value, and the value +parameter is the current value in the array. The function returns a CsNode +for the next value in the array, or None when the complete object has been +traversed. In the initial call for a given traversal, we must pass +self.children() for the cur parameter - this always points to the CsNode +for the first value in the array. Keyword arguments: @@ -288,38 +338,42 @@ Keyword arguments: ns2prefix(ns) -> Union[str, None] ``` -Returns a string giving the namespace prefix for the namespace ns, if the namespace is known to the library - otherwise it returns None. +Returns a string giving the namespace prefix for the namespace ns, if the +namespace is known to the library - otherwise it returns None. Keyword arguments: * ns -- a namespace hash -### pp\_kpath +### pp_kpath ```python pp_kpath(hkeypath) -> str ``` -Utility function which pretty prints a string representation of the path hkeypath. This will use the NCS curly brace notation, i.e. "/servers/server{www}/ip". Requires that schema information is available to the library. +Utility function which pretty prints a string representation of the path +hkeypath. This will use the NCS curly brace notation, i.e. +"/servers/server{www}/ip". Requires that schema information is available +to the library.
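Functions such as pp_kpath() are typically applied to the HKeypathRef objects handed to callbacks. The sketch below is hedged: the diff-iteration callback signature and the ITER_RECURSE constant are assumptions based on the CDB API rather than on this section, and it requires that schema information has been loaded.

```python
import _ncs

# Hypothetical cdb.diff_iterate() callback: kp is the borrowed HKeypathRef
# passed in by the library.
def iter_cb(kp, op, oldv, newv, state):
    print('changed node:', _ncs.pp_kpath(kp))
    return _ncs.ITER_RECURSE
```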
Keyword arguments: * hkeypath -- a HKeypathRef instance -### pp\_kpath\_len +### pp_kpath_len ```python pp_kpath_len(hkeypath, len) -> str ``` -A variant of pp\_kpath() that prints only the first len elements of hkeypath. +A variant of pp_kpath() that prints only the first len elements of hkeypath. Keyword arguments: -* hkeypath -- a \_lib.HKeypathRef instance +* hkeypath -- a _lib.HKeypathRef instance * len -- number of elements to print -### set\_debug +### set_debug ```python set_debug(level, file) -> None @@ -332,13 +386,14 @@ Keyword arguments: * file -- (optional) * level -- (optional) -### set\_kill\_child\_on\_parent\_exit +### set_kill_child_on_parent_exit ```python set_kill_child_on_parent_exit() -> bool ``` -Instruct the operating system to kill this process if the parent process exits. +Instruct the operating system to kill this process if the parent process +exits. ### str2hash @@ -346,13 +401,15 @@ Instruct the operating system to kill this process if the parent process exits. str2hash(str) -> int ``` -Returns the hash value representing the node name given by str, or 0 if the string is not found. Requires that schema information has been loaded from the NCS daemon into the library - otherwise it always returns 0. +Returns the hash value representing the node name given by str, or 0 if the +string is not found. Requires that schema information has been loaded from +the NCS daemon into the library - otherwise it always returns 0. Keyword arguments: * str -- a name string -### stream\_connect +### stream_connect ```python stream_connect(sock, id, flags, ip, port, path) -> None @@ -365,27 +422,31 @@ Keyword arguments: * sock -- a Python socket instance * id -- id * flags -- flags -* ip -- ip address - if sock family is AF\_INET or AF\_INET6 (optional) -* port -- port - if sock family is AF\_INET or AF\_INET6 (optional) -* path -- a filename - if sock family is AF\_UNIX (optional) +* ip -- ip address - if sock family is AF_INET or AF_INET6 (optional) +* port -- port - if sock family is AF_INET or AF_INET6 (optional) +* path -- a filename - if sock family is AF_UNIX (optional) -### xpath\_pp\_kpath +### xpath_pp_kpath ```python xpath_pp_kpath(hkeypath) -> str ``` -Utility function which pretty prints a string representation of the path hkeypath. This will format the path as an XPath, i.e. "/servers/server\[name="www"']/ip". Requires that schema information is available to the library. +Utility function which pretty prints a string representation of the path +hkeypath. This will format the path as an XPath, i.e. +"/servers/server[name="www"']/ip". Requires that schema information is +available to the library. Keyword arguments: * hkeypath -- a HKeypathRef instance + ## Classes ### _class_ **AttrValue** -This type represents the c-type confd\_attr\_value\_t. +This type represents the c-type confd_attr_value_t. The contructor for this type has the following signature: @@ -416,7 +477,7 @@ attribute value (Value) ### _class_ **AuthorizationInfo** -This type represents the c-type struct confd\_authorization\_info. +This type represents the c-type struct confd_authorization_info. AuthorizationInfo cannot be directly instantiated from Python. @@ -432,7 +493,7 @@ authorization groups (list of strings) ### _class_ **CsCase** -This type represents the c-type struct confd\_cs\_case. +This type represents the c-type struct confd_cs_case. CsCase cannot be directly instantiated from Python. @@ -538,7 +599,7 @@ Returns the CsCase tag hash. 
### _class_ **CsChoice** -This type represents the c-type struct confd\_cs\_choice. +This type represents the c-type struct confd_cs_choice. CsChoice cannot be directly instantiated from Python. @@ -658,7 +719,7 @@ Returns the CsChoice tag hash. ### _class_ **CsNode** -This type represents the c-type struct confd\_cs\_node. +This type represents the c-type struct confd_cs_node. CsNode cannot be directly instantiated from Python. @@ -1044,7 +1105,7 @@ Returns the tag value. ### _class_ **CsNodeInfo** -This type represents the c-type struct confd\_cs\_node\_info. +This type represents the c-type struct confd_cs_node_info. CsNodeInfo cannot be directly instantiated from Python. @@ -1130,7 +1191,7 @@ Method: max_occurs() -> int ``` -Returns CsNodeInfo max\_occurs. +Returns CsNodeInfo max_occurs.
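A hedged sketch of reading the min_occurs()/max_occurs() accessors on CsNodeInfo from a schema node; the path is made up, schemas must already be loaded, and info() is assumed to be the CsNode accessor that returns the CsNodeInfo instance.

```python
import _ncs

node = _ncs.cs_node_cd(None, '/ncs:devices/device')  # absolute path, no starting node
if node is not None:
    info = node.info()  # assumed accessor returning CsNodeInfo
    print('min_occurs:', info.min_occurs(), 'max_occurs:', info.max_occurs())
```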
@@ -1144,7 +1205,7 @@ Method: meta_data() -> Union[Dict, None] ``` -Returns CsNodeInfo meta\_data. +Returns CsNodeInfo meta_data. @@ -1158,7 +1219,7 @@ Method: min_occurs() -> int ``` -Returns CsNodeInfo min\_occurs. +Returns CsNodeInfo min_occurs. @@ -1172,7 +1233,7 @@ Method: shallow_type() -> int ``` -Returns CsNodeInfo shallow\_type. +Returns CsNodeInfo shallow_type. @@ -1192,7 +1253,7 @@ Returns CsNodeInfo type. ### _class_ **CsType** -This type represents the c-type struct confd\_type. +This type represents the c-type struct confd_type. CsType cannot be directly instantiated from Python. @@ -1208,7 +1269,10 @@ Method: bitbig_size() -> int ``` -Returns the maximum size needed for the byte array for the BITBIG value when a YANG bits type has a highest position above 63. If this is not a BITBIG value or if the highest position is 63 or less, this function will return 0. +Returns the maximum size needed for the byte array for the BITBIG value +when a YANG bits type has a highest position above 63. If this is not a +BITBIG value or if the highest position is 63 or less, this function will +return 0. @@ -1242,11 +1306,12 @@ Returns the CsType parent. ### _class_ **DateTime** -This type represents the c-type struct confd\_datetime. +This type represents the c-type struct confd_datetime. The contructor for this type has the following signature: -DateTime(year, month, day, hour, min, sec, micro, timezone, timezone\_minutes) -> object +DateTime(year, month, day, hour, min, sec, micro, timezone, + timezone_minutes) -> object Keyword arguments: @@ -1258,7 +1323,7 @@ Keyword arguments: * sec -- seconds (int) * micro -- micro seconds (int) * timezone -- the timezone (int) -* timezone\_minutes -- number of timezone\_minutes (int) +* timezone_minutes -- number of timezone_minutes (int) Members: @@ -1336,27 +1401,33 @@ the year ### _class_ **HKeypathRef** -This type represents the c-type confd\_hkeypath\_t. +This type represents the c-type confd_hkeypath_t. -HKeypathRef implements some sequence methods which enables indexing, iteration and length checking. There is also support for slicing, e.g: +HKeypathRef implements some sequence methods which enables indexing, +iteration and length checking. There is also support for slicing, e.g: -Lets say the variable hkp is a valid hkeypath pointing to '/foo/bar{a}/baz' and we slice that object like this: +Lets say the variable hkp is a valid hkeypath pointing to '/foo/bar{a}/baz' +and we slice that object like this: -``` -newhkp = hkp[1:] -``` + newhkp = hkp[1:] -In this case newhkp will be a new hkeypath pointing to '/foo/bar{a}'. Note that the last element must always be included, so trying to create a slice with hkp\[1:2] will fail. +In this case newhkp will be a new hkeypath pointing to '/foo/bar{a}'. +Note that the last element must always be included, so trying to create +a slice with hkp[1:2] will fail. -The example above could also be written using the dup\_len() method: +The example above could also be written using the dup_len() method: -``` -newhkp = hkp.dup_len(3) -``` + newhkp = hkp.dup_len(3) -Retrieving an element of the HKeypathRef when the underlying Value is of type C\_XMLTAG returns a XmlTag instance. In all other cases a tuple of Values is returned. +Retrieving an element of the HKeypathRef when the underlying Value is of +type C_XMLTAG returns a XmlTag instance. In all other cases a tuple of +Values is returned. 
-When receiving an HKeypathRef object as on argument in a callback method, the underlying object is only borrowed, so this particular instance is only valid inside that callback method. If one, for some reason, would like to keep the HKeypathRef object 'alive' for any longer than that, use dup() or dup\_len() to get a copy of it. Slicing also creates a copy. +When receiving an HKeypathRef object as on argument in a callback method, +the underlying object is only borrowed, so this particular instance is only +valid inside that callback method. If one, for some reason, would like +to keep the HKeypathRef object 'alive' for any longer than that, use +dup() or dup_len() to get a copy of it. Slicing also creates a copy. HKeypathRef cannot be directly instantiated from Python. @@ -1396,7 +1467,7 @@ Keyword arguments: ### _class_ **ProgressLink** -This type represents the c-type struct confd\_progress\_link. +This type represents the c-type struct confd_progress_link. confdProgressLink cannot be directly instantiated from Python. @@ -1420,9 +1491,10 @@ trace id (string) ### _class_ **QueryResult** -This type represents the c-type struct confd\_query\_result. +This type represents the c-type struct confd_query_result. -QueryResult implements some sequence methods which enables indexing, iteration and length checking. +QueryResult implements some sequence methods which enables indexing, +iteration and length checking. QueryResult cannot be directly instantiated from Python. @@ -1462,7 +1534,7 @@ the query result type (int) ### _class_ **SnmpVarbind** -This type represents the c-type struct confd\_snmp\_varbind. +This type represents the c-type struct confd_snmp_varbind. The contructor for this type has the following signature: @@ -1470,14 +1542,15 @@ SnmpVarbind(type, val, vartype, name, oid, cr) -> object Keyword arguments: -* type -- SNMP\_VARIABLE, SNMP\_OID or SNMP\_COL\_ROW (int) +* type -- SNMP_VARIABLE, SNMP_OID or SNMP_COL_ROW (int) * val -- value (Value) * vartype -- snmp type (optional) -* name -- mandatory if type is SNMP\_VARIABLE (string) -* oid -- mandatory if type is SNMP\_OID (list of integers) -* cr -- mandatory if type is SNMP\_COL\_ROW (described below) +* name -- mandatory if type is SNMP_VARIABLE (string) +* oid -- mandatory if type is SNMP_OID (list of integers) +* cr -- mandatory if type is SNMP_COL_ROW (described below) -When type is SNMP\_COL\_ROW the cr argument must be provided. It is built up as a 2-tuple like this: tuple(string, list(int)). +When type is SNMP_COL_ROW the cr argument must be provided. It is built up +as a 2-tuple like this: tuple(string, list(int)). The first element of the 2-tuple is the column name. @@ -1495,15 +1568,18 @@ the SnmpVarbind type ### _class_ **TagValue** -This type represents the c-type confd\_tag\_value\_t. +This type represents the c-type confd_tag_value_t. -In addition to the 'ns' and 'tag' attributes there is an additional attribute 'v' which containes the Value object. +In addition to the 'ns' and 'tag' attributes there is an additional +attribute 'v' which containes the Value object. The contructor for this type has the following signature: TagValue(xmltag, v, tag, ns) -> object -There are two ways to contruct this object. The first one requires that both xmltag and v are specified. The second one requires that both tag and ns are specified. +There are two ways to contruct this object. The first one requires that both +xmltag and v are specified. The second one requires that both tag and ns are +specified. 
Keyword arguments:

@@ -1532,18 +1608,20 @@ tag hash

### _class_ **TransCtxRef**

-This type represents the c-type struct confd\_trans\_ctx.
+This type represents the c-type struct confd_trans_ctx.

Available attributes:

* fd -- worker socket (int)
* th -- transaction handle (int)
-* secondary\_index -- secondary index number for list traversal (int)
+* secondary_index -- secondary index number for list traversal (int)
* username -- from user session (string) DEPRECATED, see uinfo
* context -- from user session (string) DEPRECATED, see uinfo
* uinfo -- user session (UserInfo)
-* accumulated -- if the data provider is using the accumulate functionality this attribute will contain the first dp.TrItemRef object in the linked list, otherwise if will be None
-* traversal\_id -- unique id for the get\_next\* invocation
+* accumulated -- if the data provider is using the accumulate functionality
+                 this attribute will contain the first dp.TrItemRef object
+                 in the linked list, otherwise it will be None
+* traversal_id -- unique id for the get_next* invocation

TransCtxRef cannot be directly instantiated from Python.

@@ -1553,7 +1631,7 @@ _None_

### _class_ **UserInfo**

-This type represents the c-type struct confd\_user\_info.
+This type represents the c-type struct confd_user_info.

UserInfo cannot be directly instantiated from Python.

@@ -1563,7 +1641,7 @@ Members:

actx_thandle

-actx\_thandle -- action context transaction handle
+actx_thandle -- action context transaction handle

@@ -1579,7 +1657,7 @@ addr -- ip address (string)

af

-af -- address family AF\_INIT or AF\_INET6 (int)
+af -- address family AF_INET or AF_INET6 (int)

@@ -1603,7 +1681,7 @@ context -- the context (string)

flags

-flags -- CONFD\_USESS\_FLAG\_... (int)
+flags -- CONFD_USESS_FLAG_... (int)

@@ -1643,7 +1721,7 @@ proto -- protocol (int)

snmp_v3_ctx

-snmp\_v3\_ctx -- SNMP context (string)
+snmp_v3_ctx -- SNMP context (string)

@@ -1665,38 +1743,44 @@ usid -- user session id (int)

### _class_ **Value**

-This type represents the c-type confd\_value\_t.
+This type represents the c-type confd_value_t.

The contructor for this type has the following signature:

Value(init, type) -> object

-If type is not provided it will be automatically set by inspecting the type of argument init according to this table:
+If type is not provided it will be automatically set by inspecting the type
+of argument init according to this table:

-| Python type | Value type |
-| ----------- | ---------- |
-| bool | C\_BOOL |
-| int | C\_INT32 |
-| long | C\_INT64 |
-| float | C\_DOUBLE |
-| string | C\_BUF |
+Python type      | Value type
+-----------------|------------
+bool             | C_BOOL
+int              | C_INT32
+long             | C_INT64
+float            | C_DOUBLE
+string           | C_BUF

-If any other type is provided for the init argument, the type will be set to C\_BUF and the value will be the string representation of init.
+If any other type is provided for the init argument, the type will be set to
+C_BUF and the value will be the string representation of init.

-For types C\_XMLTAG, C\_XMLBEGIN and C\_XMLEND the init argument must be a 2-tuple which specifies the ns and tag values like this: (ns, tag).
+For types C_XMLTAG, C_XMLBEGIN and C_XMLEND the init argument must be a
+2-tuple which specifies the ns and tag values like this: (ns, tag).

-For type C\_IDENTITYREF the init argument must be a 2-tuple which specifies the ns and id values like this: (ns, id).
+For type C_IDENTITYREF the init argument must be a
+2-tuple which specifies the ns and id values like this: (ns, id).
-For types C\_IPV4, C\_IPV6, C\_DATETIME, C\_DATE, C\_TIME, C\_DURATION, C\_OID, C\_IPV4PREFIX and C\_IPV6PREFIX, the init argument must be a string. +For types C_IPV4, C_IPV6, C_DATETIME, C_DATE, C_TIME, C_DURATION, C_OID, +C_IPV4PREFIX and C_IPV6PREFIX, the init argument must be a string. -For type C\_DECIMAL64 the init argument must be a string, or a 2-tuple which specifies value and fraction digits like this: (value, fraction\_digits). +For type C_DECIMAL64 the init argument must be a string, or a 2-tuple which +specifies value and fraction digits like this: (value, fraction_digits). -For type C\_BINARY the init argument must be a bytes instance. +For type C_BINARY the init argument must be a bytes instance. Keyword arguments: * init -- the initial value -* type -- type (optional, see confd\_types(3)) +* type -- type (optional, see confd_types(3)) Members: @@ -1710,7 +1794,8 @@ Method: as_decimal64() -> Tuple[int, int] ``` -Returns a tuple containing (value, fraction\_digits) if this value is of type C\_DECIMAL64. +Returns a tuple containing (value, fraction_digits) if this value is of +type C_DECIMAL64. @@ -1724,7 +1809,7 @@ Method: as_list() -> list ``` -Returns a list of Value's if this value is of type C\_LIST. +Returns a list of Value's if this value is of type C_LIST. @@ -1738,11 +1823,15 @@ Method: as_pyval() -> Any ``` -Tries to convert a Value to a native Python type. If possible the object returned will be of the same type as used when initializing a Value object. If the type cannot be represented as something useful in Python a string will be returned. Note that not all Value types are supported. +Tries to convert a Value to a native Python type. If possible the object +returned will be of the same type as used when initializing a Value object. +If the type cannot be represented as something useful in Python a string +will be returned. Note that not all Value types are supported. -E.g. assuming you already have a value object, this should be possible in most cases: +E.g. assuming you already have a value object, this should be possible +in most cases: -newvalue = Value(value.as\_pyval(), value.confd\_type()) + newvalue = Value(value.as_pyval(), value.confd_type()) @@ -1756,7 +1845,7 @@ Method: as_xmltag() -> XmlTag ``` -Returns a XmlTag instance if this value is of type C\_XMLTAG. +Returns a XmlTag instance if this value is of type C_XMLTAG. @@ -1799,12 +1888,14 @@ str2val(value, schema_type) -> Value (class method) ``` -Create and return a Value from a string. The schema\_type argument must be either a 2-tuple with namespace and keypath, a CsNode instance or a CsType instance. +Create and return a Value from a string. The schema_type argument must be +either a 2-tuple with namespace and keypath, a CsNode instance or a CsType +instance. Keyword arguments: * value -- string value -* schema\_type -- either (ns, keypath), a CsNode or a CsType +* schema_type -- either (ns, keypath), a CsNode or a CsType @@ -1818,17 +1909,19 @@ Method: val2str(schema_type) -> str ``` -Return a string representation of Value. The schema\_type argument must be either a 2-tuple with namespace and keypath, a CsNode instance or a CsType instance. +Return a string representation of Value. The schema_type argument must be +either a 2-tuple with namespace and keypath, a CsNode instance or a CsType +instance. 
Keyword arguments: -* schema\_type -- either (ns, keypath), a CsNode or a CsType +* schema_type -- either (ns, keypath), a CsNode or a CsType ### _class_ **XmlTag** -This type represent the c-type struct xml\_tag. +This type represent the c-type struct xml_tag. The contructor for this type has the following signature: @@ -1984,6 +2077,7 @@ ERR_BADSTATE = 17 ERR_BADTYPE = 5 ERR_BAD_CONFIG = 36 ERR_BAD_KEYREF = 14 +ERR_BAD_PAYLOAD = 72 ERR_CLI_CMD = 59 ERR_DATA_MISSING = 58 ERR_EOF = 45 @@ -2162,6 +2256,18 @@ TRACE = 2 TRANSACTION = 5 TRANS_CB_FLAG_FILTERED = 1 TRUE = 1 +TYPE_BITS = 3 +TYPE_DECIMAL64 = 4 +TYPE_DISPLAY_HINT = 10 +TYPE_ENUM = 1 +TYPE_IDENTITY = 11 +TYPE_IDREF = 2 +TYPE_LIST = 6 +TYPE_LIST_RESTR = 9 +TYPE_NONE = 0 +TYPE_NUMBER = 7 +TYPE_STRING = 8 +TYPE_UNION = 5 USESS_FLAG_FORWARD = 1 USESS_FLAG_HAS_IDENTIFICATION = 2 USESS_FLAG_HAS_OPAQUE = 4 diff --git a/developer-reference/pyapi/index.md b/developer-reference/pyapi/index.md new file mode 100644 index 00000000..338d260f --- /dev/null +++ b/developer-reference/pyapi/index.md @@ -0,0 +1,25 @@ +# Python API Reference + +Documentation for Python modules, generated from module source: + +- [ncs](ncs.md): NCS Python high level module. +- [ncs.alarm](ncs.alarm.md): NCS Alarm Manager module. +- [ncs.application](ncs.application.md): Module for building NCS applications. +- [ncs.cdb](ncs.cdb.md): CDB high level module. +- [ncs.dp](ncs.dp.md): Callback module for connecting data providers to ConfD/NCS. +- [ncs.experimental](ncs.experimental.md): Experimental stuff. +- [ncs.log](ncs.log.md): This module provides some logging utilities. +- [ncs.maagic](ncs.maagic.md): Confd/NCS data access module. +- [ncs.maapi](ncs.maapi.md): MAAPI high level module. +- [ncs.progress](ncs.progress.md): MAAPI progress trace high level module. +- [ncs.service_log](ncs.service_log.md): This module provides service logging +- [ncs.template](ncs.template.md): This module implements classes to simplify template processing. +- [ncs.util](ncs.util.md): Utility module, low level abstrations +- [_ncs](_ncs.md): NCS Python low level module. +- [_ncs.cdb](_ncs.cdb.md): Low level module for connecting to NCS built-in XML database (CDB). +- [_ncs.dp](_ncs.dp.md): Low level callback module for connecting data providers to NCS. +- [_ncs.error](_ncs.error.md): This module defines new NCS Python API exception classes. +- [_ncs.events](_ncs.events.md): Low level module for subscribing to NCS event notifications. +- [_ncs.ha](_ncs.ha.md): Low level module for connecting to NCS HA subsystem. +- [_ncs.maapi](_ncs.maapi.md): Low level module for connecting to NCS with a read/write interface +inside transactions. diff --git a/developer-reference/pyapi/ncs.cdb.md b/developer-reference/pyapi/ncs.cdb.md index 22c241a2..bc09c919 100644 --- a/developer-reference/pyapi/ncs.cdb.md +++ b/developer-reference/pyapi/ncs.cdb.md @@ -135,7 +135,7 @@ called terminates -- either normally or through an unhandled exception or until the optional timeout occurs. When the timeout argument is present and not None, it should be a -floating point number specifying a timeout for the operation in seconds +floating-point number specifying a timeout for the operation in seconds (or fractions thereof). As join() always returns None, you must call is_alive() after join() to decide whether a timeout happened -- if the thread is still alive, the join() call timed out. @@ -489,7 +489,7 @@ called terminates -- either normally or through an unhandled exception or until the optional timeout occurs. 
When the timeout argument is present and not None, it should be a -floating point number specifying a timeout for the operation in seconds +floating-point number specifying a timeout for the operation in seconds (or fractions thereof). As join() always returns None, you must call is_alive() after join() to decide whether a timeout happened -- if the thread is still alive, the join() call timed out. @@ -810,7 +810,7 @@ called terminates -- either normally or through an unhandled exception or until the optional timeout occurs. When the timeout argument is present and not None, it should be a -floating point number specifying a timeout for the operation in seconds +floating-point number specifying a timeout for the operation in seconds (or fractions thereof). As join() always returns None, you must call is_alive() after join() to decide whether a timeout happened -- if the thread is still alive, the join() call timed out. diff --git a/developer-reference/pyapi/ncs.dp.md b/developer-reference/pyapi/ncs.dp.md index 99100623..5591f273 100644 --- a/developer-reference/pyapi/ncs.dp.md +++ b/developer-reference/pyapi/ncs.dp.md @@ -363,7 +363,7 @@ called terminates -- either normally or through an unhandled exception or until the optional timeout occurs. When the timeout argument is present and not None, it should be a -floating point number specifying a timeout for the operation in seconds +floating-point number specifying a timeout for the operation in seconds (or fractions thereof). As join() always returns None, you must call is_alive() after join() to decide whether a timeout happened -- if the thread is still alive, the join() call timed out. @@ -1232,7 +1232,6 @@ NCS_UNKNOWN_NED_IDS_COMPLIANCE_TEMPLATE = 124 NCS_UNKNOWN_NED_ID_DEVICE_TEMPLATE = 106 NCS_XML_PARSE = 11 NCS_YANGLIB_NO_SCHEMA_FOR_RUNNING = 114 -OPERATION_CASE_EXISTS = 13 PATCH_FLAG_AAA_CHECKED = 8 PATCH_FLAG_BUFFER_DAMPENED = 2 PATCH_FLAG_FILTER = 4 diff --git a/developer-reference/pyapi/ncs.maagic.md b/developer-reference/pyapi/ncs.maagic.md index 26964e75..67c4506a 100644 --- a/developer-reference/pyapi/ncs.maagic.md +++ b/developer-reference/pyapi/ncs.maagic.md @@ -6,6 +6,21 @@ This module implements classes and function for easy access to the data store. There is no need to manually instantiate any of the classes herein. The only functions that should be used are cd(), get_node() and get_root(). +Node Comparison in NSO 6.1.17+ (May 2025-): +------------------------------------------ + +In NSO 6.1.17, 6.2.12, 6.3.9, 6.4.5, 6.5.1 and 6.6 node object caching changed, +due to excessive memory usage. This change broke services that use +node == comparisons. Use get_node_path() for reliable node identification: + + from ncs.maagic import get_node_path + + # Instead of: device1 == device2 + # Use: get_node_path(device1) == get_node_path(device2) + + # Dictionary keys: + device_cache = {get_node_path(device): data} + ## Functions ### as_pyval @@ -137,6 +152,27 @@ Example use: node = ncs.maagic.get_node(t, '/ncs:devices/device{ce0}') +### get_node_path + +```python +get_node_path(node) +``` + +Get the keypath of a maagic node. + +Provides reliable node identification across NSO versions where object +caching behavior has changed. 
+ +Arguments: +* node -- the maagic node (maagic.Node) + +Returns: +* keypath of the node as a string (str or None) + +Example: + if get_node_path(device1) == get_node_path(device2): + print("Same device") + ### get_root ```python diff --git a/developer-reference/pyapi/ncs.maapi.md b/developer-reference/pyapi/ncs.maapi.md index a355f86f..a2778e68 100644 --- a/developer-reference/pyapi/ncs.maapi.md +++ b/developer-reference/pyapi/ncs.maapi.md @@ -1584,6 +1584,41 @@ Returns:
+get_template_variables(...) + +Method: + +```python +get_template_variables(self, name, type_enum) +``` + +Get template variables for specific types. + +
+ +
+ +get_trans_mode(...) + +Method: + +```python +get_trans_mode(self, th) +``` + +Get transaction mode for a transaction handle. + +Arguments: +* th -- a transaction handle. + +Returns: + +* Either READ or READ_WRITE flag (ncs) or -1 (no transaction). + +
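The following is a minimal usage sketch for `get_trans_mode()`, assuming it is a method on `ncs.maapi.Maapi` as the surrounding section suggests, and that `READ` and `READ_WRITE` are the constants in the `ncs` module referred to above; the user and context names are illustrative only:

```python
import ncs
from ncs import maapi

# Minimal sketch (see assumptions above): ask NSO whether a transaction
# handle refers to a read-only or a read-write transaction.
with maapi.Maapi() as m:
    with maapi.Session(m, 'admin', 'system'):
        with m.start_read_trans() as t:
            mode = m.get_trans_mode(t.th)
            if mode == ncs.READ_WRITE:
                print('read-write transaction')
            elif mode == ncs.READ:
                print('read-only transaction')
            else:
                print('no transaction for this handle (-1)')
```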
+ +
+ ip _Readonly property_ @@ -2395,6 +2430,68 @@ Close the user session.
+### _class_ **TemplateTypes** + +Enumeration for template types: +DEVICE_TEMPLATE = 0 +SERVICE_TEMPLATE = 1 +COMPLIANCE_TEMPLATE = 2 + +```python +TemplateTypes(*values) +``` + +Members: + +
+ +COMPLIANCE_TEMPLATE + +```python +COMPLIANCE_TEMPLATE = 2 +``` + + +
+ +
+ +DEVICE_TEMPLATE + +```python +DEVICE_TEMPLATE = 0 +``` + + +
+ +
+ +SERVICE_TEMPLATE + +```python +SERVICE_TEMPLATE = 1 +``` + + +
+ +
+ +name + +The name of the Enum member. + +
+ +
+ +value + +The value of the Enum member. + +
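As a rough sketch of how `TemplateTypes` might be combined with the `get_template_variables()` method added earlier in this file, assuming both belong to `ncs.maapi` and that the method returns the variables declared by the named template (the template name below is hypothetical, and the exact return format is not specified in this excerpt):

```python
from ncs import maapi

# Minimal sketch (see assumptions above): list the variables declared by
# a device template named 'base-config' (hypothetical name).
with maapi.Maapi() as m:
    with maapi.Session(m, 'admin', 'system'):
        variables = m.get_template_variables(
            'base-config', maapi.TemplateTypes.DEVICE_TEMPLATE)
        print(variables)
```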
+ ### _class_ **Transaction** Class that corresponds to a single MAAPI transaction. diff --git a/developer-reference/pyapi/ncs.md b/developer-reference/pyapi/ncs.md index 81e4b1b5..c2ff4782 100644 --- a/developer-reference/pyapi/ncs.md +++ b/developer-reference/pyapi/ncs.md @@ -172,6 +172,7 @@ ERR_BADSTATE = 17 ERR_BADTYPE = 5 ERR_BAD_CONFIG = 36 ERR_BAD_KEYREF = 14 +ERR_BAD_PAYLOAD = 72 ERR_CLI_CMD = 59 ERR_DATA_MISSING = 58 ERR_EOF = 45 @@ -350,6 +351,18 @@ TRACE = 2 TRANSACTION = 5 TRANS_CB_FLAG_FILTERED = 1 TRUE = 1 +TYPE_BITS = 3 +TYPE_DECIMAL64 = 4 +TYPE_DISPLAY_HINT = 10 +TYPE_ENUM = 1 +TYPE_IDENTITY = 11 +TYPE_IDREF = 2 +TYPE_LIST = 6 +TYPE_LIST_RESTR = 9 +TYPE_NONE = 0 +TYPE_NUMBER = 7 +TYPE_STRING = 8 +TYPE_UNION = 5 USESS_FLAG_FORWARD = 1 USESS_FLAG_HAS_IDENTIFICATION = 2 USESS_FLAG_HAS_OPAQUE = 4 diff --git a/development/advanced-development/developing-neds/cli-ned-development.md b/development/advanced-development/developing-neds/cli-ned-development.md index a15c643b..33a7cb1a 100644 --- a/development/advanced-development/developing-neds/cli-ned-development.md +++ b/development/advanced-development/developing-neds/cli-ned-development.md @@ -6,7 +6,7 @@ description: Create CLI NEDs. The CLI NED is a model-driven way to CLI script towards all Cisco-like devices. Some Java code is necessary for handling the corner cases a human-to-machine interface presents. -See the [examples.ncs/device-manager/cli-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/cli-ned) for an example of a Java implementation serving any YANG models, including those that come with the example. +See the [examples.ncs/device-manager/cli-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/cli-ned) for an example of a Java implementation serving any YANG models, including those that come with the example. The NSO CLI NED southbound of NSO shares a Cisco-style CLI engine with the northbound NSO CLI interface, and the CLI engine can thus run in both directions, producing CLI southbound and interpreting CLI data coming from southbound while presenting a CLI interface northbound. It is helpful to keep this in mind when learning and working with CLI NEDs. diff --git a/development/advanced-development/developing-neds/generic-ned-development.md b/development/advanced-development/developing-neds/generic-ned-development.md index 142e6189..6a4a394f 100644 --- a/development/advanced-development/developing-neds/generic-ned-development.md +++ b/development/advanced-development/developing-neds/generic-ned-development.md @@ -35,7 +35,7 @@ state admin-state unlocked ... ``` -The [examples.ncs/device-management/](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/xmlrpc-device)[generic-xmlrpc-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/generic-xmlrpc-ned) example in the NSO examples collection implements a generic NED that speaks XML-RPC to 3 HTTP servers. The HTTP servers run the Apache XML-RPC server code and the NED code manipulates the 3 HTTP servers using a number of predefined XML RPC calls. +The [examples.ncs/device-management/](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/xmlrpc-device)[generic-xmlrpc-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/generic-xmlrpc-ned) example in the NSO examples collection implements a generic NED that speaks XML-RPC to 3 HTTP servers. 
The HTTP servers run the Apache XML-RPC server code and the NED code manipulates the 3 HTTP servers using a number of predefined XML RPC calls. A good starting point when we wish to implement a new generic NED is the `ncs-make-package --generic-ned-skeleton ...` command, which is used to generate a skeleton package for a generic NED. @@ -83,7 +83,7 @@ Often a useful technique with generic NEDs can be to write a pyang plugin to gen Pyang is an extensible and open-source YANG parser (written by Tail-f) available at `http://www.yang-central.org`. pyang is also part of the NSO release. A number of plugins are shipped in the NSO release, for example `$NCS_DIR/lib/pyang/pyang/plugins/tree.py` is a good plugin to start with if we wish to write our own plugin. -The [examples.ncs/device-management/](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/xmlrpc-device)[generic-xmlrpc-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/generic-xmlrpc-ned) example is a good example to start with if we wish to write a generic NED. It manages a set of devices over the XML-RPC protocol. In this example, we have: +The [examples.ncs/device-management/](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/xmlrpc-device)[generic-xmlrpc-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/generic-xmlrpc-ned) example is a good example to start with if we wish to write a generic NED. It manages a set of devices over the XML-RPC protocol. In this example, we have: * Defined a fictitious YANG model for the device. * Implemented an XML-RPC server exporting a set of RPCs to manipulate that fictitious data model. The XML-RPC server runs the Apache `org.apache.xmlrpc.server.XmlRpcServer` Java package. @@ -161,7 +161,7 @@ A device we wish to manage using a NED usually has not just configuration data t The commands on the device we wish to be able to invoke from NSO must be modeled as actions. We model this as actions and compile it using a special `ncsc` command to compile NED data models that do not directly relate to configuration data on the device. -The [examples.ncs/device-management/](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/xmlrpc-device)[generic-xmlrpc-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/generic-xmlrpc-ned) example managed device, a fictitious XML-RPC device, contains a YANG snippet: +The [examples.ncs/device-management/](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/xmlrpc-device)[generic-xmlrpc-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/generic-xmlrpc-ned) example managed device, a fictitious XML-RPC device, contains a YANG snippet: ```yang container commands { diff --git a/development/advanced-development/developing-neds/ned-upgrades-and-migration.md b/development/advanced-development/developing-neds/ned-upgrades-and-migration.md index 5b2b7f4b..2eccc65a 100644 --- a/development/advanced-development/developing-neds/ned-upgrades-and-migration.md +++ b/development/advanced-development/developing-neds/ned-upgrades-and-migration.md @@ -16,6 +16,6 @@ These features aim to lower the barrier of upgrading NEDs and significantly redu By using the `/ncs:devices/device/migrate` action, you can change the NED major/minor version of a device. The action migrates all configuration and service meta-data. 
The action can also be executed in parallel on a device group or on all devices matching a NED identity. The procedure for migrating devices is further described in [NED Migration](../../../administration/management/ned-administration.md#sec.ned\_migration). -Additionally, the example [examples.ncs/device-management/ned-migration](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/ned-migration) in the NSO examples collection illustrates how to migrate devices between different NED versions using the `migrate` action. +Additionally, the example [examples.ncs/device-management/ned-migration](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/ned-migration) in the NSO examples collection illustrates how to migrate devices between different NED versions using the `migrate` action. What makes it particularly useful to a service developer is that the action reports what paths have been modified and the service instances affected by those changes. This information can then be used to prepare the service code to handle the new NED version. If the `verbose` option is used, all service instances are reported instead of just the service points. If the `dry-run` option is used, the action simply reports what it would do. This gives you the chance to analyze before any actual change is performed. diff --git a/development/advanced-development/developing-neds/netconf-ned-development.md b/development/advanced-development/developing-neds/netconf-ned-development.md index a942609e..439cb2ec 100644 --- a/development/advanced-development/developing-neds/netconf-ned-development.md +++ b/development/advanced-development/developing-neds/netconf-ned-development.md @@ -17,7 +17,7 @@ Creating a NETCONF NED that uses the built-in NSO NETCONF client can be a pleasa Before NSO can manage a NETCONF-capable device, a corresponding NETCONF NED needs to be loaded. While no code needs to be written for such NED, it must contain YANG data models for this kind of device. While in some cases, the YANG models may be provided by the device's vendor, devices that implement RFC 6022 YANG Module for NETCONF Monitoring can provide their YANG models using the functionality described in this RFC. -The NSO example under [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/netconf-ned) implements two shell scripts that use different tools to build a NETCONF NED from a simulated hardware chassis system controller device. +The NSO example under [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/netconf-ned) implements two shell scripts that use different tools to build a NETCONF NED from a simulated hardware chassis system controller device. ### **The `netconf-console` and `ncs-make-package` Tools** @@ -35,7 +35,7 @@ The `demo_nb.sh` script in the `netconf-ned` example uses the NSO CLI NETCONF NE ## Using the **`netconf-console`** and **`ncs-make-package`** Combination -For a demo of the steps below, see the README in the [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/netconf-ned) example and run the demo.sh script. +For a demo of the steps below, see the README in the [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/netconf-ned) example and run the demo.sh script. 
### **Make the Device YANG Data Models Available to NSO** @@ -181,11 +181,11 @@ fetch-result { result true ``` -NSO can now configure the device, state data can be read, actions can be executed, and notifications can be received. See the [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/netconf-ned) `demo.sh` example script for a demo. +NSO can now configure the device, state data can be read, actions can be executed, and notifications can be received. See the [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/netconf-ned) `demo.sh` example script for a demo. ## Using the NETCONF NED Builder Tool -For a demo of the steps below, see README in the [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/netconf-ned) example and run the `demo_nb.sh` script. +For a demo of the steps below, see README in the [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/netconf-ned) example and run the `demo_nb.sh` script. ### **Configure the Device Connection** @@ -623,7 +623,7 @@ devices device hw0 ... ``` -NSO can now configure the device, state data can be read, actions can be executed, and notifications can be received. See the [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/netconf-ned) `demo_nb.sh` example script for a demo. +NSO can now configure the device, state data can be read, actions can be executed, and notifications can be received. See the [examples.ncs/device-management/netconf-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/netconf-ned) `demo_nb.sh` example script for a demo. ### **Remove a NED from NSO** diff --git a/development/advanced-development/developing-neds/snmp-ned.md b/development/advanced-development/developing-neds/snmp-ned.md index 5b169cb9..71037dc6 100644 --- a/development/advanced-development/developing-neds/snmp-ned.md +++ b/development/advanced-development/developing-neds/snmp-ned.md @@ -26,7 +26,7 @@ To add a device, the following steps need to be followed. They are described in ## Compiling and Loading MIBs -(See the `Makefile` in the [examples.ncs/device-management/snmp-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/snmp-ned) example under `packages/ex-snmp-ned/src/Makefile`, for an example of the below description.) Make sure that you have all MIBs available, including import dependencies, and that they contain no errors. +(See the `Makefile` in the [examples.ncs/device-management/snmp-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/snmp-ned) example under `packages/ex-snmp-ned/src/Makefile`, for an example of the below description.) Make sure that you have all MIBs available, including import dependencies, and that they contain no errors. The `ncsc --ncs-compile-mib-bundle` compiler is used to compile MIBs and MIB annotation files into NSO load files. Assuming a directory with input MIB files (and optional MIB annotation files) exist, the following command compiles all the MIBs in `device-models` and writes the output to `ncs-device-model-dir`. @@ -139,7 +139,7 @@ Some SNMP agents require a certain order of row deletions and creations. By defa Sometimes rows in some SNMP agents cannot be modified once created. 
Such rows can be marked with the annotation `ned-recreate-when-modified`. This makes the SNMP NED to first delete the row, and then immediately recreate it with the new values. -A good starting point for understanding annotations is to look at the example in the [examples.ncs/device-management/snmp-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/snmp-ned) directory. The BASIC-CONFIG-MIB mib has a table where rows can be modified if the `bscActAdminState` is set to locked. To have NSO do this automatically when modifying entries rather than leaving it to users an annotation file can be created. See the `BASIC-CONFIG-MIB.miba` which contains the following: +A good starting point for understanding annotations is to look at the example in the [examples.ncs/device-management/snmp-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/snmp-ned) directory. The BASIC-CONFIG-MIB mib has a table where rows can be modified if the `bscActAdminState` is set to locked. To have NSO do this automatically when modifying entries rather than leaving it to users an annotation file can be created. See the `BASIC-CONFIG-MIB.miba` which contains the following: ``` ## NCS Annotation module for BASIC-CONFIG-MIB @@ -158,7 +158,7 @@ Make sure that the MIB annotation file is put into the directory where all the M NSO can manage SNMP devices within transactions, a transaction can span Cisco devices, NETCONF devices, and SNMP devices. If a transaction fails NSO will generate the reverse operation to the SNMP device. -The basic features of the SNMP will be illustrated below by using the [examples.ncs/device-management/snmp-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/snmp-ned) example. First, try to connect to all SNMP devices: +The basic features of the SNMP will be illustrated below by using the [examples.ncs/device-management/snmp-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/snmp-ned) example. First, try to connect to all SNMP devices: ```cli admin@ncs# devices connect diff --git a/development/advanced-development/developing-packages.md b/development/advanced-development/developing-packages.md index ab1946fc..1ca91bfb 100644 --- a/development/advanced-development/developing-packages.md +++ b/development/advanced-development/developing-packages.md @@ -123,7 +123,7 @@ The `netsim` directory contains three files: 6. `%NAME%` - for the name of the ConfD instance. 7. `%COUNTER%` - for the number of the ConfD instance * The `Makefile` should compile the YANG files so that ConfD can run them. The `Makefile` should also have an `install` target that installs all files required for ConfD to run one instance of a simulated network element. This is typically all `fxs` files. -* An optional `start.sh` file where additional programs can be started. A good example of a package where the netsim component contains some additional C programs is the `webserver` package in [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/website-service) example. +* An optional `start.sh` file where additional programs can be started. A good example of a package where the netsim component contains some additional C programs is the `webserver` package in [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/website-service) example. 
Remember the picture of the network we wish to work with, there the routers, PE and CE, have an IP address and some additional data. So far here, we have generated a simulated network with YANG models. The routers in our simulated network have no data in them, we can log in to one of the routers to verify that: @@ -138,7 +138,7 @@ admin@zoe> exit The ConfD devices in our simulated network all have a Juniper CLI engine, thus we can, using the command `ncs-netsim cli [devicename]`, log in to an individual router. -To achieve this, we need to have some additional XML initializing files for the ConfD instances. It is the responsibility of the `install` target in the netsim Makefile to ensure that each ConfD instance gets initialized with the proper init data. In the NSO example collection, the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-java) and [examples.ncs/service-management/mpls-vpn-python](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-python) examples contain the two above-mentioned PE and CE packages but modified, so that the network elements in the simulated network get initialized properly. +To achieve this, we need to have some additional XML initializing files for the ConfD instances. It is the responsibility of the `install` target in the netsim Makefile to ensure that each ConfD instance gets initialized with the proper init data. In the NSO example collection, the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-java) and [examples.ncs/service-management/mpls-vpn-python](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-python) examples contain the two above-mentioned PE and CE packages but modified, so that the network elements in the simulated network get initialized properly. If we run that example in the NSO example collection we see: @@ -202,7 +202,7 @@ With the scripting mechanism, an end-user can add new functionality to NSO in a Scripts defined in an NSO package work pretty much as system-level scripts configured with the `/ncs-config/scripts/dir` configuration parameter. The difference is that the location of the scripts is predefined. The scripts directory must be named `scripts` and must be located in the top directory of the package. -In this complete example [examples.ncs/sdk-api/scripting](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/scripting), there is a `README` file and a simple post-commit script `packages/scripting/scripts/post-commit/show_diff.sh` as well as a simple command script `packages/scripting/scripts/command/echo.sh`. +In this complete example [examples.ncs/sdk-api/scripting](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/scripting), there is a `README` file and a simple post-commit script `packages/scripting/scripts/post-commit/show_diff.sh` as well as a simple command script `packages/scripting/scripts/command/echo.sh`. ## Creating a Service Package @@ -538,7 +538,7 @@ In debugging and error reporting, these root cause messages can be valuable to u * `verbose`: Show all messages for the chain of cause exceptions, if any. * `trace`: Show messages for the chain of cause exceptions with exception class and the trace for the bottom root cause. -Here is an example of how this can be used. 
In the [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/website-service) example, we try to create a service without the necessary pre-preparations: +Here is an example of how this can be used. In the [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/website-service) example, we try to create a service without the necessary pre-preparations: {% code title="Example: Setting Error Message Verbosity" %} ```cli diff --git a/development/advanced-development/developing-services/service-development-using-java.md b/development/advanced-development/developing-services/service-development-using-java.md index 5dbf4d95..6ae16d74 100644 --- a/development/advanced-development/developing-services/service-development-using-java.md +++ b/development/advanced-development/developing-services/service-development-using-java.md @@ -698,7 +698,7 @@ The steps to build the solution described in this section are: ## Layer 3 MPLS VPN Service -This service shows a more elaborate service mapping. It is based on the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-java) example. +This service shows a more elaborate service mapping. It is based on the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-java) example. MPLS VPNs are a type of Virtual Private Network (VPN) that achieves segmentation of network traffic using Multiprotocol Label Switching (MPLS), often found in Service Provider (SP) networks. The Layer 3 variant uses BGP to connect and distribute routes between sites of the VPN. @@ -751,7 +751,7 @@ The information needed to sort out what PE router a CE router is connected to as ### Creating a Multi-Vendor Service -This section describes the creation of an MPLS L3VPN service in a multi-vendor environment by applying the concepts described above. The example discussed can be found in [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-java). The example network consists of Cisco ASR 9k and Juniper core routers (P and PE) and Cisco IOS-based CE routers. +This section describes the creation of an MPLS L3VPN service in a multi-vendor environment by applying the concepts described above. The example discussed can be found in [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-java). The example network consists of Cisco ASR 9k and Juniper core routers (P and PE) and Cisco IOS-based CE routers. The goal of the NSO service is to set up an MPLS Layer3 VPN on a number of CE router endpoints using BGP as the CE-PE routing protocol. Connectivity between the CE and PE routers is done through a Layer2 Ethernet access network, which is out of the scope of this service. In a real-world scenario, the access network could for example be handled by another service. 
diff --git a/development/advanced-development/developing-services/services-deep-dive.md b/development/advanced-development/developing-services/services-deep-dive.md index 02af8ab8..baae09f7 100644 --- a/development/advanced-development/developing-services/services-deep-dive.md +++ b/development/advanced-development/developing-services/services-deep-dive.md @@ -112,7 +112,7 @@ Location of the plan data if the service plan is used. See [Nano Services for St -While not part of `ncs:service-data` as such, you may consider the `service-commit-queue-event` notification part of the core service interface. The notification provides information about the state of the service when the service uses the commit queue. As an example, an event-driven application uses this notification to find out when a service instance has been deployed to the devices. See the `showcase_rc.py` script in [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-stack) for sample Python code, leveraging the notification. See `tailf-ncs-services.yang` for the full definition of the notification. +While not part of `ncs:service-data` as such, you may consider the `service-commit-queue-event` notification part of the core service interface. The notification provides information about the state of the service when the service uses the commit queue. As an example, an event-driven application uses this notification to find out when a service instance has been deployed to the devices. See the `showcase_rc.py` script in [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-stack) for sample Python code, leveraging the notification. See `tailf-ncs-services.yang` for the full definition of the notification. NSO Service Manager is responsible for providing the functionality of the common service interface, requiring no additional user code. This interface is the same for classic and nano services, whereas nano services further extend the model. @@ -232,7 +232,7 @@ The Java callbacks use the following function arguments: * `service`: A NavuNode for the service instance. * `opaque`: Opaque service properties, see [Persistent Opaque Data](services-deep-dive.md#ch_svcref.opaque). -See [examples.ncs/service-management/iface-postmod-py](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/iface-postmod-py) and [examples.ncs/service-management/iface-postmod-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/iface-postmod-java) examples for a sample implementation of the post-modification callback. +See [examples.ncs/service-management/iface-postmod-py](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/iface-postmod-py) and [examples.ncs/service-management/iface-postmod-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/iface-postmod-java) examples for a sample implementation of the post-modification callback. Additionally, you may implement these callbacks with templates. Refer to [Service Callpoints and Templates](../../core-concepts/templates.md#ch_templates.servicepoint) for details. 
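For orientation, the sketch below shows the usual Python pattern for registering a service with a post-modification callback. It is not taken from the iface-postmod-py example; the service point name and log messages are hypothetical, and the signatures follow the standard `ncs.application.Service` decorators:

```python
import ncs
from ncs.application import Service


class IfaceService(Service):
    @Service.create
    def cb_create(self, tctx, root, service, proplist):
        self.log.info('creating ', service._path)

    # Runs after create/update/delete of the service instance;
    # 'op' indicates which operation triggered the callback.
    @Service.post_modification
    def cb_post_modification(self, tctx, op, kp, root, proplist):
        self.log.info('post-modification for ', str(kp))


class App(ncs.application.Application):
    def setup(self):
        # 'iface-servicepoint' is a hypothetical service point name.
        self.register_service('iface-servicepoint', IfaceService)
```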
@@ -288,7 +288,7 @@ Compared to pre- and post-modification callbacks, which also persist data outsid ``` {% endcode %} -The [examples.ncs/service-management/iface-postmod-py](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/iface-postmod-py) and [examples.ncs/service-management/iface-postmod-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/iface-postmod-java) examples showcase the use of opaque properties. +The [examples.ncs/service-management/iface-postmod-py](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/iface-postmod-py) and [examples.ncs/service-management/iface-postmod-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/iface-postmod-java) examples showcase the use of opaque properties. ## Defining Static Service Conflicts @@ -326,7 +326,7 @@ Furthermore, containers and list items created using the `sharedCreate()` and `s `backpointer` points back to the service instance that created the entity in the first place. This makes it possible to look at part of the configuration, say under `/devices` tree, and answer the question: which parts of the device configuration were created by which service? -To see reference counting in action, start the [examples.ncs/service-management/implement-a-service/iface-v3](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/implement-a-service/iface-v3) example with `make demo` and configure a service instance. +To see reference counting in action, start the [examples.ncs/service-management/implement-a-service/iface-v3](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/iface-v3) example with `make demo` and configure a service instance. ```bash admin@ncs(config)# iface instance1 device c1 interface 0/1 ip-address 10.1.2.3 cidr-netmask 28 @@ -411,7 +411,7 @@ Then you create a higher-level service, say a CFS, that configures another servi ``` {% endcode %} -The preceding example references an existing `iface` service, such as the one in the [examples.ncs/service-management/implement-a-service/iface-v3](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/implement-a-service/iface-v3) example. The output shows hard-coded values but you can change those as you would for any other service. +The preceding example references an existing `iface` service, such as the one in the [examples.ncs/service-management/implement-a-service/iface-v3](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/iface-v3) example. The output shows hard-coded values but you can change those as you would for any other service. In practice, you might find it beneficial to modularize your data model and potentially reuse parts in both, the lower- and higher-level service. This avoids duplication while still allowing you to directly expose some of the lower-level service functionality through the higher-level model. @@ -777,7 +777,7 @@ This approach provides an excellent way to maintain an overview of services depl To address this, we can nest the services within another list. By organizing all services under a common structure, we enable the ability to view and manage multiple service types for a device in a unified manner, providing a comprehensive overview with a single command. -To illustrate this approach, we need to introduce another service type. 
Moving beyond the dummy example, let’s use a more realistic scenario: the [mpls-vpn-simple](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-simple) example. We'll refactor this service to adopt the stacked service approach while maintaining the existing customer-facing interface. +To illustrate this approach, we need to introduce another service type. Moving beyond the dummy example, let’s use a more realistic scenario: the [mpls-vpn-simple](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-simple) example. We'll refactor this service to adopt the stacked service approach while maintaining the existing customer-facing interface. After the refactor, the service will shift from provisioning multiple devices directly through a single instance to creating a separate service instance for each device, VPN, and endpoint, what we call resource-facing services. These resource-facing services will be structured so that all device-specific services are grouped under a node for each device. @@ -986,7 +986,7 @@ You may also obtain some useful information by using the `debug service` commit However, the service may also delete data implicitly, through `when` and `choice` statements in the YANG data model. If a `when` statement evaluates to false, the configuration tree below that node is deleted. Likewise, if a `case` is set in a `choice` statement, the previously set `case` is deleted. This has the same limitations as an explicit delete. \ - To avoid these issues, create a separate service, that only handles deletion, and use it in the main service through the stacked service design (see [Stacked Services](services-deep-dive.md#ch_svcref.stacking)). This approach allows you to reference count the deletion operation and contains the effect of restoring deleted data through a small, rarely-changing helper service. See [examples.ncs/service-management/shared-delete](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/shared-delete) for an example. + To avoid these issues, create a separate service, that only handles deletion, and use it in the main service through the stacked service design (see [Stacked Services](services-deep-dive.md#ch_svcref.stacking)). This approach allows you to reference count the deletion operation and contains the effect of restoring deleted data through a small, rarely-changing helper service. See [examples.ncs/service-management/shared-delete](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/shared-delete) for an example. \ Alternatively, you might consider pre- and post-modification callbacks for some specific cases. @@ -1001,7 +1001,7 @@ You may also obtain some useful information by using the `debug service` commit ``` \ - Likewise, do not use other MAAPI `load_config` variants from the service code. Use the `loadConfigCmds()` or `sharedSetValues()` function to load XML data from a file or a string. See [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-bulkcreate) for an example. + Likewise, do not use other MAAPI `load_config` variants from the service code. Use the `loadConfigCmds()` or `sharedSetValues()` function to load XML data from a file or a string. See [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-bulkcreate) for an example. 
* **Reordering ordered-by-user lists**: If the service code rearranges an ordered-by-user list with items that were created by another service, that other service becomes out of sync. In some cases, you might be able to avoid out-of-sync scenarios by leveraging special XML template syntax (see [Operations on ordered lists and leaf-lists](../../core-concepts/templates.md#ch_templates.order_ops)) or using service stacking with a helper service. In general, however, you should reconsider your design and try to avoid such scenarios. @@ -1033,7 +1033,7 @@ A prerequisite (or possibly the product in an iterative approach) is an NSO serv Alternatively, some parts of the configuration could be managed as out-of-band, in order to simplify and expedite the development of the service model and the mapping logic. But out-of-band data has more limitations when used with service updates. See [Out-of-band Interoperation](../../../operation-and-usage/operations/out-of-band-interoperation.md) for specific disadvantages and carefully consider if out-of-band data is really the right choice. -In the simplest case, there is only one variant and that is the one that the service needs to produce. Let's take the [examples.ncs/service-management/implement-a-service/iface-v2-py](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/implement-a-service/iface-v2-py) example and consider what happens when a device already has an existing interface configuration. +In the simplest case, there is only one variant and that is the one that the service needs to produce. Let's take the [examples.ncs/service-management/implement-a-service/iface-v2-py](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/iface-v2-py) example and consider what happens when a device already has an existing interface configuration. ```bash admin@ncs# show running-config devices device c1 config\ @@ -1351,7 +1351,7 @@ admin@ncs# iface instance2 re-deploy reconcile Nevertheless, keep in mind that the discard-non-service-config reconcile operation only considers parts of the device configuration under nodes that are created with the service mapping. Even if all data there is covered in the mapping, there could still be other parts that belong to the service but reside in an entirely different section of the device configuration (say DNS configuration under `ip name-server`, which is outside the `interface GigabitEthernet` part) or even a different device. That kind of configuration the `discard-non-service-config` option cannot find on its own and you must add manually. -You can find the complete `iface` service as part of the [examples.ncs/service-management/discovery](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/discovery) example. +You can find the complete `iface` service as part of the [examples.ncs/service-management/discovery](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/discovery) example. Since there were only two service instances to reconcile, the process is now complete. In practice, you are likely to encounter multiple variants and many more service instances, requiring you to make additional iterations. But you can follow the iterative process shown here. @@ -1371,7 +1371,7 @@ It is important to note that `partial-sync-from` and `partial-sync-to` clear the Pulling the configuration from the network needs to be initiated outside the service code. 
At the same time, the list of configuration subtrees required by a certain service should be maintained by the service developer. Hence it is a good practice for such a service to implement a wrapper action that invokes the generic `/devices/partial-sync-from` action with the correct list of paths. The user or application that manages the service would only need to invoke the wrapper action without needing to know which parts of the configuration the service is interested in. -The snippet in the example below shows running the `partial-sync-from` action via Java, using the `router` device from the [examples.ncs/device-management/router-network](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/router-network) example. +The snippet in the example below shows running the `partial-sync-from` action via Java, using the `router` device from the [examples.ncs/device-management/router-network](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/router-network) example. {% code title="Example of Running partial-sync-from Action via Java API" %} ```java diff --git a/development/advanced-development/kicker.md b/development/advanced-development/kicker.md index 538f8940..2c324fd7 100644 --- a/development/advanced-development/kicker.md +++ b/development/advanced-development/kicker.md @@ -249,7 +249,7 @@ Monitor expressions are expanded and installed in an internal data structure at ### A Simple Data Kicker Example -This example is part of the [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/website-service) example. It consists of an action and a `README_KICKER` file. For all kickers defined in this example, the same action is used. This action is defined in the `website-service` package. +This example is part of the [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/website-service) example. It consists of an action and a `README_KICKER` file. For all kickers defined in this example, the same action is used. This action is defined in the `website-service` package. The following is the YANG snippet for the action definition from the `website.yang` file: @@ -334,7 +334,7 @@ class WebSiteServiceRFS { } ``` -We are now ready to start the [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/website-service) example and define our data kicker. Do the following: +We are now ready to start the [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/website-service) example and define our data kicker. Do the following: ```bash $ make all @@ -498,7 +498,7 @@ When using both, serializer and priority, only kickers with the same serializer In this example, we use the same action and setup as in the data kicker example above. The procedure for starting is also the same. -The [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/website-service) example has devices that have notifications generated on the stream "interface". We start with defining the notification kicker for a certain `SUBSCRIPTION_NAME = 'mysub'`. 
This subscription does not exist for the moment and the kicker will therefore not be triggered: +The [examples.ncs/service-management/website-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/website-service) example has devices that have notifications generated on the stream "interface". We start with defining the notification kicker for a certain `SUBSCRIPTION_NAME = 'mysub'`. This subscription does not exist for the moment and the kicker will therefore not be triggered: ```cli admin@ncs# config diff --git a/development/advanced-development/scaling-and-performance-optimization.md b/development/advanced-development/scaling-and-performance-optimization.md index 052e3e73..b8b5e960 100644 --- a/development/advanced-development/scaling-and-performance-optimization.md +++ b/development/advanced-development/scaling-and-performance-optimization.md @@ -194,7 +194,7 @@ For progress trace documentation, see [Progress Trace](progress-trace.md). ### Running the `perf-trans` Example Using a Single Transaction -The [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-trans) example from the NSO example set explores the opportunities to improve the wall-clock time performance and utilization, as well as opportunities to avoid common pitfalls. +The [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example from the NSO example set explores the opportunities to improve the wall-clock time performance and utilization, as well as opportunities to avoid common pitfalls. The example uses simulated CPU loads for service creation and validation work. Device work is simulated with `sleep()` as it will not run on the same processor in a production system. @@ -202,15 +202,15 @@ The example shows how NSO can benefit from running many transactions concurrentl The provided code sets up an NSO instance that exports tracing data to a `.csv` file, provisions one or more service instances, which each map to a device, and shows different (average) transaction times and a graph to visualize the sequences plus concurrency. -Play with the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-trans) example by tweaking the `measure.py` script parameters: +Play with the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example by tweaking the `measure.py` script parameters: ```code plain patch ``` -See the README in the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-trans) example for details. +See the README in the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example for details. 
-To run the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-trans) example from the NSO example set and recreate the variant shown in the progress trace above: +To run the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example from the NSO example set and recreate the variant shown in the progress trace above: ```bash cd $NCS_DIR/examples.ncs/scaling-performance/perf-trans @@ -278,9 +278,9 @@ Suppose a service creates a significant amount of configuration data for devices #### **Running the `perf-bulkcreate` Example Using a Single Call to MAAPI `shared_set_values()`** -The [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/blob/main/scaling-performance/perf-bulkcreate) example writes configuration to an access control list and a route list of a Cisco Adaptive Security Appliance (ASA) device. It uses either MAAPI Python with a configuration template, `create()` and `set()` calls, Python `shared_set_values()` and `load_config_cmds()`, or Java `sharedSetValues()` and `loadConfigCmds()` to write the configuration in XML format. +The [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-bulkcreate) example writes configuration to an access control list and a route list of a Cisco Adaptive Security Appliance (ASA) device. It uses either MAAPI Python with a configuration template, `create()` and `set()` calls, Python `shared_set_values()` and `load_config_cmds()`, or Java `sharedSetValues()` and `loadConfigCmds()` to write the configuration in XML format. -To run the [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/blob/main/scaling-performance/perf-bulkcreate) example using MAAPI Python `create()` and `set()` calls to create 3000 rules and 3000 routes on one device: +To run the [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-bulkcreate) example using MAAPI Python `create()` and `set()` calls to create 3000 rules and 3000 routes on one device: ```bash cd $NCS_DIR/examples.ncs/scaling-performance/perf-bulkcreate @@ -291,7 +291,7 @@ The commit uses the `no-networking` parameter to skip pushing the configuration
-Next, run the [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/blob/main/scaling-performance/perf-bulkcreate) example using a single MAAPI Python `shared_set_values()` call to create 3000 rules and 3000 routes on one device: +Next, run the [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-bulkcreate) example using a single MAAPI Python `shared_set_values()` call to create 3000 rules and 3000 routes on one device: ``` ./measure.sh -r 3000 -t py_setvals_xml -n true @@ -319,7 +319,7 @@ Writing to devices and other network elements that are slow to configure will st ### Running the `perf-trans` Example Using One Transaction per Device -Dividing the service creation and validation work into two separate transactions, one per device, allows the work to be spread across two CPU cores in a multi-core processor. To run the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-trans) example with the work divided into one transaction per device: +Dividing the service creation and validation work into two separate transactions, one per device, allows the work to be spread across two CPU cores in a multi-core processor. To run the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example with the work divided into one transaction per device: ```bash cd $NCS_DIR/examples.ncs/scaling-performance/perf-trans @@ -359,7 +359,7 @@ For commit queue documentation, see [Commit Queue](../../operation-and-usage/ope ### Enabling Commit Queues for the `perf-trans` Example -Enabling commit queues allows the two transactions to spread the create, validation, and configuration push to devices work across CPU cores in a multi-core processor. Only the CDB write and commit queue write now remain inside the critical section, and the transaction lock is released as soon as the device configuration changes have been written to the commit queues instead of waiting for the config push to the devices to complete. To run the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-trans) example with the work divided into one transaction per device and commit queues enabled: +Enabling commit queues allows the two transactions to spread the create, validation, and configuration push to devices work across CPU cores in a multi-core processor. Only the CDB write and commit queue write now remain inside the critical section, and the transaction lock is released as soon as the device configuration changes have been written to the commit queues instead of waiting for the config push to the devices to complete. To run the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example with the work divided into one transaction per device and commit queues enabled: ```bash make stop clean NDEVS=2 python @@ -390,11 +390,11 @@ Stop NSO and the netsim devices: make stop ``` -Running the [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/blob/main/scaling-performance/perf-bulkcreate) example with two devices and commit queues enabled will produce a similar result. 
+Running the [examples.ncs/scaling-performance/perf-bulkcreate](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-bulkcreate) example with two devices and commit queues enabled will produce a similar result. ### Simplify the Per-Device Concurrent Transaction Creation Using a Nano Service -The [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-trans) example service uses one transaction per service instance where each service instance configures one device. This enables transactions to run concurrently on separate CPU cores in a multi-core processor. The example sends RESTCONF `patch` requests concurrently to start transactions that run concurrently with the NSO transaction manager. However, dividing the work into multiple processes may not be practical for some applications using the NSO northbound interfaces, e.g., CLI or RESTCONF. Also, it makes a future migration to LSA more complex. +The [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example service uses one transaction per service instance where each service instance configures one device. This enables transactions to run concurrently on separate CPU cores in a multi-core processor. The example sends RESTCONF `patch` requests concurrently to start transactions that run concurrently with the NSO transaction manager. However, dividing the work into multiple processes may not be practical for some applications using the NSO northbound interfaces, e.g., CLI or RESTCONF. Also, it makes a future migration to LSA more complex. To simplify the NSO manager application, a resource-facing nano service (RFS) can start a process per service instance. The NSO manager application or user can then use a single transaction, e.g., CLI or RESTCONF, to configure multiple service instances where the NSO nano service divides the service instances into transactions running concurrently in separate processes. @@ -416,7 +416,7 @@ Furthermore, the time spent calculating the diff-set, as seen with the `saving r ### Running the CFS and Nano Service enabled `perf-stack` Example -The [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-stack) example showcases how a CFS on top of a simple resource-facing nano service can be implemented with the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-trans) example by modifying the existing t3 RFS and adding a CFS. Instead of multiple RESTCONF transactions, the example uses a single CLI CFS service commit that updates the desired number of service instances. The commit configures multiple service instances in a single transaction where the nano service runs each service instance in a separate process to allow multiple cores to be used concurrently. +The [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-stack) example showcases how a CFS on top of a simple resource-facing nano service can be implemented with the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example by modifying the existing t3 RFS and adding a CFS. 
Instead of multiple RESTCONF transactions, the example uses a single CLI commit of the CFS service that updates the desired number of service instances. The commit configures all of the service instances in a single transaction, and the nano service runs each instance in a separate process so that multiple CPU cores can be used concurrently.
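
The same effect can be achieved programmatically: a single write transaction that creates several CFS instances still results in one commit, and the nano service fans the instances out to separate processes. The sketch below is illustrative only; the `cfs-t3` list and its `device` leaf are hypothetical names, not taken from the example.

{% code title="Example: Creating Several CFS Instances in One Transaction (sketch)" %}
```python
# Illustrative sketch: several CFS instances created in a single transaction.
# The 'cfs-t3' list and its 'device' leaf are hypothetical names.
import ncs

with ncs.maapi.single_write_trans('admin', 'python') as t:
    root = ncs.maagic.get_root(t)
    for i in range(2):
        instance = root.cfs_t3.create(f't{i}')
        instance.device = f'ex{i}'
    t.apply()  # one commit; the nano service handles per-instance concurrency
```
{% endcode %}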
@@ -444,7 +444,7 @@ commit trans=2 RFS nwork=1 nwork=1 cq=True device ddelay=1 wall-clock 1s 1s 1s=3s ``` -The two transactions run concurrently, deploying the service in \~3 seconds (plus some overhead) of wall-clock time. Like the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-trans) example, you can play around with the [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-stack) example by tweaking the parameters. +The two transactions run concurrently, deploying the service in \~3 seconds (plus some overhead) of wall-clock time. Like the [examples.ncs/scaling-performance/perf-trans](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-trans) example, you can play around with the [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-stack) example by tweaking the parameters. ``` -d NDEVS @@ -473,7 +473,7 @@ The two transactions run concurrently, deploying the service in \~3 seconds (plu Default: 1 second ``` -See the `README` in the [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-stack) example for details. For even more details, see the steps in the `showcase` script. +See the `README` in the [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-stack) example for details. For even more details, see the steps in the `showcase` script. Stop NSO and the netsim devices: @@ -483,7 +483,7 @@ make stop ### Migrating to and Scale Up Using an LSA Setup -If the processor where NSO runs becomes a severe bottleneck, the CFS can migrate to a layered service architecture (LSA) setup. The [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-stack) example implements stacked services, a CFS abstracting the RFS. It allows for easy migration to an LSA setup to scale with the number of devices or network elements participating in the service deployment. While adding complexity, LSA allows exposing a single CFS instance for all processors instead of one per processor. +If the processor where NSO runs becomes a severe bottleneck, the CFS can migrate to a layered service architecture (LSA) setup. The [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-stack) example implements stacked services, a CFS abstracting the RFS. It allows for easy migration to an LSA setup to scale with the number of devices or network elements participating in the service deployment. While adding complexity, LSA allows exposing a single CFS instance for all processors instead of one per processor. 
{% hint style="info" %} Before considering taking on the complexity of a multi-NSO node LSA setup, make sure you have done the following: @@ -502,7 +502,7 @@ Migrating to an LSA setup should only be considered after checking all boxes for ### Running the LSA-enabled `perf-lsa` Example -The [examples.ncs/scaling-performance/perf-lsa](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-lsa) example builds on the [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-stack) example and showcases an LSA setup using two RFS NSO instances, `lower-nso-1` and `lower-nso-2`, with a CFS NSO instance, `upper-nso`. +The [examples.ncs/scaling-performance/perf-lsa](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-lsa) example builds on the [examples.ncs/scaling-performance/perf-stack](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-stack) example and showcases an LSA setup using two RFS NSO instances, `lower-nso-1` and `lower-nso-2`, with a CFS NSO instance, `upper-nso`.
@@ -540,7 +540,7 @@ commit ntrans=2 RFS 1 nwork=1 nwork=1 cq=True device ddelay=1 The four transactions run concurrently, two per RFS node, performing the work and configuring the four devices in \~3 seconds (plus some overhead) of wall-clock time. -You can play with the [examples.ncs/scaling-performance/perf-lsa](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-lsa) example by tweaking the parameters. +You can play with the [examples.ncs/scaling-performance/perf-lsa](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-lsa) example by tweaking the parameters. ``` -d LDEVS @@ -571,7 +571,7 @@ You can play with the [examples.ncs/scaling-performance/perf-lsa](https://github Default: 1 second ``` -See the `README` in the [examples.ncs/scaling-performance/perf-lsa](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/perf-lsa) example for details. For even more details, see the steps in the `showcase` script. +See the `README` in the [examples.ncs/scaling-performance/perf-lsa](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/perf-lsa) example for details. For even more details, see the steps in the `showcase` script. Stop NSO and the netsim devices: @@ -615,31 +615,6 @@ For small NSO systems, the schema will usually consume more resources than the i NEDs with a large schema and many YANG models often include a significant number of YANG models that are unused. If RAM usage is an issue, consider removing unused YANG models from such NEDs. {% endhint %} -#### Total Committed Memory Impact with Multiple Python VMs - -Note that the schema is memory-mapped into shared memory, so even though multiple Python VMs might be started, resident memory usage will not increase proportionally, as the schema is shared between different clients. However, total committed memory (`Committed_AS`) will increase and may cause issues if the `schema size * number of Python VMs` is significant enough that `CommitLimit` is reached. - -If increasing the available RAM is not an option, a workaround can be to have all, or a selected subset, of Python-based packages share a `vm-name` and run in the same Python VM thread. - -#### Sharing a Python VM Across Packages - -To share a Python VM, set the same `vm-name` in each package’s `package-meta-data.xml` file: - -{% code title="package-meta-data.xml vm-name config example" overflow="wrap" %} -```xml - - ... - - shared - threading - - ... - -``` -{% endcode %} - -See [The package-meta-data.xml File](../core-concepts/packages.md#d5e4962) for more details. See [Enable Strict Overcommit Accounting](../../administration/installation-and-deployment/system-install.md#enable-strict-overcommit-accounting-on-the-host) or [Overcommit Inside a Container](../../administration/installation-and-deployment/containerized-nso.md#d5e8605) for `Committed_AS` and `CommitLimit` details. - #### Note on the Java VM The Java VM uses its own copy of the schema, which is also why the JVM memory consumption follows the size of the loaded YANG schema. diff --git a/development/connected-topics/encryption-keys.md b/development/connected-topics/encryption-keys.md index 526bb78a..0dae6080 100644 --- a/development/connected-topics/encryption-keys.md +++ b/development/connected-topics/encryption-keys.md @@ -57,7 +57,7 @@ Example error output: ERROR=error message ``` -Below is a complete example of an application written in Python providing encryption keys from a plain text file. 
The application is included in the [examples.ncs/sdk-api/external-encryption-keys](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/external-encryption-keys) example: +Below is a complete example of an application written in Python providing encryption keys from a plain text file. The application is included in the [examples.ncs/sdk-api/external-encryption-keys](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/external-encryption-keys) example: ```python #!/usr/bin/env python3 diff --git a/development/connected-topics/scheduler.md b/development/connected-topics/scheduler.md index ed8988bb..7c1e30a3 100644 --- a/development/connected-topics/scheduler.md +++ b/development/connected-topics/scheduler.md @@ -67,7 +67,7 @@ The following list describes the legal special characters and how you can use th ### Scheduling Periodic Compaction -[Compaction](../../administration/advanced-topics/cdb-persistence.md#compaction) in NSO can take a considerable amount of time, during which transactions could be blocked. To avoid disruption, it might be advantageous to schedule compaction during times of low NSO utilization. This can be done using the NSO scheduler and a service. See [examples.ncs/misc/periodic-compaction](https://github.com/NSO-developer/nso-examples/tree/6.5/misc/periodic-compaction) for an example that demonstrates how to create a periodic compaction service that can be scheduled using the NSO scheduler. +[Compaction](../../administration/advanced-topics/cdb-persistence.md#compaction) in NSO can take a considerable amount of time, during which transactions could be blocked. To avoid disruption, it might be advantageous to schedule compaction during times of low NSO utilization. This can be done using the NSO scheduler and a service. See [examples.ncs/misc/periodic-compaction](https://github.com/NSO-developer/nso-examples/tree/6.6/misc/periodic-compaction) for an example that demonstrates how to create a periodic compaction service that can be scheduled using the NSO scheduler. ## Scheduling Non-recurring Work diff --git a/development/connected-topics/snmp-notification-receiver.md b/development/connected-topics/snmp-notification-receiver.md index d7010039..a680e834 100644 --- a/development/connected-topics/snmp-notification-receiver.md +++ b/development/connected-topics/snmp-notification-receiver.md @@ -53,7 +53,7 @@ NSO uses the Java package SNMP4J to parse the SNMP PDUs. Notification Handlers are user-supplied Java classes that implement the `com.tailf.snmp.snmp4j.NotificationHandler` interface. The `processPDU` method is expected to react on the SNMP4J event, e.g. by mapping the PDU to an NSO alarm. The handlers are registered in the `NotificationReceiver`. The `NotificationReceiver` is the main class that, in addition to maintaining the handlers, also has the responsibility to read the NSO SNMP notification configuration and set up `SNMP4J` listeners accordingly. -An example of a notification handler can be found at [examples.ncs/device-management/snmp-notification-receiver](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/snmp-notification-receiver). This example handler receives notifications and sets an alarm text if the notification is an `IF-MIB::linkDown trap`. +An example of a notification handler can be found at [examples.ncs/device-management/snmp-notification-receiver](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/snmp-notification-receiver). 
This example handler receives notifications and sets an alarm text if the notification is an `IF-MIB::linkDown trap`. ```java public class ExampleHandler implements NotificationHandler { diff --git a/development/core-concepts/api-overview/java-api-overview.md b/development/core-concepts/api-overview/java-api-overview.md index 518c877f..8c17eb7c 100644 --- a/development/core-concepts/api-overview/java-api-overview.md +++ b/development/core-concepts/api-overview/java-api-overview.md @@ -283,7 +283,7 @@ Write operations that do not attempt to obtain the subscription lock, are allowe To view registered subscribers, use the `ncs --status` command. For details on how to use the different subscription functions, see the Javadoc for NSO Java API. -The code in the [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/cdb-java) example illustrates three different types of CDB subscribers: +The code in the [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/cdb-java) example illustrates three different types of CDB subscribers: * A simple CDB config subscriber that utilizes the low-level CDB API directly to subscribe to changes in the subtree of the configuration. * Two Navu CDB subscribers, one subscribing to configuration changes, and one subscribing to changes in operational data. @@ -292,7 +292,7 @@ The code in the [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer The DP API makes it possible to create callbacks which are called when certain events occur in NSO. As the name of the API indicates, it is possible to write data provider callbacks that provide data to NSO that is stored externally. However, this is only one of several callback types provided by this API. There exist callback interfaces for the following types: -* Service Callbacks - invoked for service callpoints in the YANG model. Implements service to device information mappings. See, for example, [examples.ncs/service-management/rfs-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/rfs-service). +* Service Callbacks - invoked for service callpoints in the YANG model. Implements service to device information mappings. See, for example, [examples.ncs/service-management/rfs-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/rfs-service). * Action Callbacks - invoked for a certain action in the YANG model which is defined with a callpoint directive. * Authentication Callbacks - invoked for external authentication functions. * Authorization Callbacks - invoked for external authorization of operations and data. Note, avoid this callback if possible since performance will otherwise be affected. @@ -417,7 +417,7 @@ We also have two additional optional callbacks that may be implemented for effic * `getObject()`: If this optional callback is implemented, the work of the callback is to return an entire `object`, i.e., a list instance. This is not the same `getObject()` as the one that is used in combination with the `iterator()` * `numInstances()`: When NSO needs to figure out how many instances we have of a certain element, by default NSO will repeatedly invoke the `iterator()` callback. If this callback is installed, it will be called instead. -The following example illustrates an external data provider. The example is possible to run from the examples collection. 
It resides under [examples.ncs/sdk-api/external-db](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/external-db). +The following example illustrates an external data provider. The example is possible to run from the examples collection. It resides under [examples.ncs/sdk-api/external-db](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/external-db). The example comes with a tailor-made database - `MyDb`. That source code is provided with the example but not shown here. However, the functionality will be obvious from the method names like `newItem()`, `lock()`, `save()`, etc. @@ -684,7 +684,7 @@ The action callbacks are: * `init()` Similar to the transaction `init()` callback. However note that, unlike the case with transaction and data callbacks, both `init()` and `action()` are registered for each `actionpoint` (i.e. different action points can have different `init()` callbacks), and there is no `finish()` callback - the action is completed when the `action()` callback returns. * `action()` This callback is invoked to actually execute the `rpc` or `action`. It receives the input parameters (if any) and returns the output parameters (if any). -In the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-java) example, we can define a `self-test` action. In the `packages/l3vpn/src/yang/l3vpn.yang`, we locate the service callback definition: +In the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-java) example, we can define a `self-test` action. In the `packages/l3vpn/src/yang/l3vpn.yang`, we locate the service callback definition: ``` uses ncs:service-data; @@ -761,38 +761,32 @@ The transaction validation callbacks are: * `init()`: This callback is invoked when the validation phase starts. It will typically attach to the current transaction: -{% code title="Example: Attach Maapi to the Current Transaction" %} -```` -``` - public class SimpleValidator implements DpTransValidateCallback{ - ... - @TransValidateCallback(callType=TransValidateCBType.INIT) - public void init(DpTrans trans) throws DpCallbackException{ - try { - th = trans.thandle; - maapi.attach(th, new MyNamesapce().hash(), trans.uinfo.usid); - .. - } catch(Exception e) { - throw new DpCallbackException("failed to attach via maapi: "+ - e.getMessage()); - } - } +{% code title="Example: Attach Maapi to the Current Transaction" overflow="wrap" %} +```java +public class SimpleValidator implements DpTransValidateCallback{ + ... + @TransValidateCallback(callType=TransValidateCBType.INIT) + public void init(DpTrans trans) throws DpCallbackException{ + try { + th = trans.thandle; + maapi.attach(th, new MyNamesapce().hash(), trans.uinfo.usid); + .. + } + catch(Exception e) { + throw new DpCallbackException("failed to attach via maapi: "+ e.getMessage()); + } + } } ``` -```` {% endcode %} -``` -\ -``` - * `stop()`: This callback is invoked when the validation phase ends. If `init()` attached to the transaction, `stop()` should detach from it. The actual validation logic is implemented in a validation callback: * `validate()`: This callback is invoked for a specific validation point. -### Transforms +#### Transforms Transforms implement a mapping between one part of the data model - the front-end of the transform - and another part - the back-end of the transform. 
Typically the front-end is visible to northbound interfaces, while the back-end is not, but for operational data (`config false` in the data model), a transform may implement a different view (e.g. aggregation) of data that is also visible without going through the transform. @@ -800,7 +794,7 @@ The implementation of a transform uses techniques already described in this sect To specify that the front-end data is provided by a transform, the data model uses the `tailf:callpoint` statement with a `tailf:transform true` substatement. Since transforms do not participate in the two-phase commit protocol, they only need to register the `init()` and `finish()` transaction callbacks. The `init()` callback attaches to the transaction and `finish()` detaches from it. Also, a transform for operational data only needs to register the data callbacks that read data, i.e. `getElem()`, `existsOptional()`, etc. -### Hooks +#### Hooks Hooks make it possible to have changes to the configuration trigger additional changes. In general, this should only be done when the data that is written by the hook is not visible to northbound interfaces since otherwise, the additional changes will make it difficult e.g. EMS or NMS systems to manage the configuration - the complete configuration resulting from a given change cannot be predicted. However, one use case in NSO for hooks that trigger visible changes is precisely to model-managed devices that have this behavior: hooks in the device model can emulate what the device does on certain configuration changes, and thus the device configuration in NSO remains in sync with the actual device configuration. @@ -808,11 +802,11 @@ The implementation technique for a hook is very similar to that for a transform. To specify that changes to some part of the configuration should trigger a hook invocation, the data model uses the `tailf:callpoint` statement with a `tailf:set-hook` or `tailf:transaction-hook` substatement. A set-hook is invoked immediately when a northbound agent requests a write operation on the data, while a transaction-hook is invoked when the transaction is committed. For the NSO-specific use case mentioned above, a `set-hook` should be used. The `tailf:set-hook` and `tailf:transaction-hook` statements take an argument specifying the extent of the data model the hook applies to. -## NED API +### NED API -NSO can speak southbound to an arbitrary management interface. This is of course not entirely automatic like with NETCONF or SNMP, and depending on the type of interface the device has for configuration, this may involve some programming. Devices with a Cisco-style CLI can however be managed by writing YANG models describing the data in the CLI, and a relatively thin layer of Java code to handle the communication to the devices. Refer to [Network Element Drivers (NEDs)](../../advanced-development/developing-neds/) for more information. +NSO can speak southbound to an arbitrary management interface. This is of course not entirely automatic like with NETCONF or SNMP, and depending on the type of interface the device has for configuration, this may involve some programming. Devices with a Cisco-style CLI can however be managed by writing YANG models describing the data in the CLI, and a relatively thin layer of Java code to handle the communication to the devices. Refer to Network Element Drivers (NEDs) for more information. -## NAVU API +### NAVU API The NAVU API provides a DOM-driven approach to navigate the NSO service and device models. 
The main features of the NAVU API are dynamic schema loading at start-up and lazy loading of instance data. The navigation model is based on the YANG language structure. In addition to navigation and reading of values, NAVU also provides methods to modify the data model. Furthermore, it supports the execution of actions modeled in the service model. @@ -822,7 +816,7 @@ NAVU requires all models i.e. the complete NSO service model with all its augmen The `ncsc` tool can also generate Java classes from the .yang files. These files, extending the `ConfNamespace` base class, are the Java representation of the models and contain all defined nametags and their corresponding hash values. These Java classes can, optionally, be used as help classes in the service applications to make NAVU navigation type-safe, e.g. eliminating errors from misspelled model container names. -

NAVU Design Support

+

NAVU Design Support

The service models are loaded at start-up and are always the latest version. The models are always traversed in a lazy fashion i.e. data is only loaded when it is needed. This is to minimize the amount of data transferred between NSO and the service applications. @@ -833,7 +827,7 @@ The most important classes of NAVU are the classes implementing the YANG node ty * `NavuListEntry`: list node entry. * `NavuLeaf`: the NavuLeaf represents a YANG leaf node. -

NAVU YANG Structure

+

NAVU YANG Structure

The remaining part of this section will guide us through the most useful features of the NAVU. Should further information be required, please refer to the corresponding Javadoc pages. @@ -853,7 +847,7 @@ module tailf-ncs { {% endcode %} {% code title="Example: NSO NavuContainer Instance" %} -``` +```java ..... NavuContext context = new NavuContext(maapi); context.startRunningTrans(Conf.MODE_READ); @@ -900,7 +894,7 @@ submodule tailf-ncs-devices { If the purpose is to directly access a list node, we would typically do a direct navigation to the list element using the NAVU primitives. {% code title="Example: NAVU List Direct Element Access" %} -``` +```java ..... NavuContext context = new NavuContext(maapi); context.startRunningTrans(Conf.MODE_READ); @@ -920,7 +914,7 @@ If the purpose is to directly access a list node, we would typically do a direct Or if we want to iterate over all elements of a list we could do as follows. {% code title="Example: NAVU List Element Iterating" %} -``` +```java ..... NavuContext context = new NavuContext(maapi); context.startRunningTrans(Conf.MODE_READ); @@ -943,7 +937,7 @@ The above example uses the `select()` which uses a recursive regexp match agains Alternatively, if the purpose is to drill down deep into a structure we should use `select()`. The `select()` offers a wild card-based search. The search is relative and can be performed from any node in the structure. {% code title="Example: NAVU Leaf Access" %} -``` +```java ..... NavuContext context = new NavuContext(maapi); context.startRunningTrans(Conf.MODE_READ); @@ -965,7 +959,7 @@ All of the above are valid ways of traversing the lists depending on the purpose An alternative method is to use the `xPathSelect()` where an XPath query could be issued instead. {% code title="Example: NAVU Leaf Access" %} -``` +```java ..... NavuContext context = new NavuContext(maapi); context.startRunningTrans(Conf.MODE_READ); @@ -1018,7 +1012,7 @@ module tailf-ncs { To read and update a leaf, we simply navigate to the leaf and request the value. And in the same manner, we can update the value. {% code title="Example: NAVU List Element Iterating" %} -``` +```java ..... NavuContext context = new NavuContext(maapi); context.startRunningTrans(Conf.MODE_READ); @@ -1094,7 +1088,7 @@ module interfaces { To execute the action below we need to access a device with this module loaded. This is done in a similar way to non-action nodes. {% code title="Example: NAVU Action Execution (1)" %} -``` +```java ..... NavuContext context = new NavuContext(maapi); context.startRunningTrans(Conf.MODE_READ); @@ -1140,7 +1134,7 @@ To execute the action below we need to access a device with this module loaded. Or, we could do it with `xPathSelect()`. {% code title="Example: NAVU Action Execution (2)" %} -``` +```java ..... NavuContext context = new NavuContext(maapi); context.startRunningTrans(Conf.MODE_READ); diff --git a/development/core-concepts/api-overview/python-api-overview.md b/development/core-concepts/api-overview/python-api-overview.md index cb2cbe00..c3c11a0d 100644 --- a/development/core-concepts/api-overview/python-api-overview.md +++ b/development/core-concepts/api-overview/python-api-overview.md @@ -1147,7 +1147,7 @@ print("/operdata/value is now %s" % new_value) The Python `_ncs.events` low-level module provides an API for subscribing to and processing NSO event notifications. Typically, the event notification API is used by applications that manage NSO using the SDK API using, for example, MAAPI or for debug purposes. 
In addition to subscribing to the various events, streams available over other northbound interfaces, such as NETCONF, RESTCONF, etc., can be subscribed to as well. -See [`examples.ncs/sdk-api/event-notifications`](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/event-notifications) for an example. The [`examples.ncs/common/event_notifications.py`](https://github.com/NSO-developer/nso-examples/tree/6.5/common/event_notifications.py) Python script used by the example can also be used as a standalone application to, for example, debug any NSO instance. +See [`examples.ncs/sdk-api/event-notifications`](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/event-notifications) for an example. The [`examples.ncs/common/event_notifications.py`](https://github.com/NSO-developer/nso-examples/tree/6.6/common/event_notifications.py) Python script used by the example can also be used as a standalone application to, for example, debug any NSO instance. ## Advanced Topics @@ -1228,3 +1228,75 @@ Functions and methods that accept the `load_schemas` argument: * `ncs.maapi.Maapi() constructor` * `ncs.maapi.single_read_trans()` * `ncs.maapi.single_write_trans()` + +### The way of using `multiprocessing.Process` +When using multiprocessing in NSO, the default start method is now `spawn` instead of `fork`. +With the `spawn` method, a new Python interpreter process is started, and all arguments passed to `multiprocessing.Process` must be picklable. + +If you pass Python objects that reference low-level C structures (for example `_ncs.dp.DaemonCtxRef` or `_ncs.UserInfo`), Python will raise an error like: + +```python +TypeError: cannot pickle '' object +``` + +{% code title="Example: using multiprocessing.Process" %} +```python +import ncs +import _ncs +from ncs.dp import Action +from multiprocessing import Process +import multiprocessing + +def child(uinfo, self): + print(f"uinfo: {uinfo}, self: {self}") + +class DoAction(Action): + @Action.action + def cb_action(self, uinfo, name, kp, input, output, trans): + t1 = multiprocessing.Process(target=child, args=(uinfo, self)) + t1.start() + +class Main(ncs.application.Application): + def setup(self): + self.log.info('Main RUNNING') + self.register_action('sleep', DoAction) + + def teardown(self): + self.log.info('Main FINISHED') +``` +{% endcode %} + +This happens because `self` and `uinfo` contain low-level C references that cannot be serialized (pickled) and sent to the child process. + +To fix this, avoid passing entire objects such as `self` or `uinfo` to the process. +Instead, pass only simple or primitive data types (like strings, integers, or dictionaries) that can be pickled. 
+ +{% code title="Example: using multiprocessing.Process with primitive data" %} +```python +import ncs +import _ncs +from ncs.dp import Action +from multiprocessing import Process +import multiprocessing + +def child(usid, th, action_point): + print(f"uinfo: {usid}, th: {th}, action_point: {action_point}") + +class DoAction(Action): + @Action.action + def cb_action(self, uinfo, name, kp, input, output, trans): + usid = uinfo.usid + th = uinfo.actx_thandle + action_point = self.actionpoint + t1 = multiprocessing.Process(target=child, args=(usid,th,action_point,)) + t1.start() + +class Main(ncs.application.Application): + def setup(self): + self.log.info('Main RUNNING') + self.register_action('sleep', DoAction) + + def teardown(self): + self.log.info('Main FINISHED') +``` +{% endcode %} \ No newline at end of file diff --git a/development/core-concepts/implementing-services.md b/development/core-concepts/implementing-services.md index b262948f..b8083431 100644 --- a/development/core-concepts/implementing-services.md +++ b/development/core-concepts/implementing-services.md @@ -109,7 +109,7 @@ Bringing the two XML documents together gives the final `dns/templates/dns-templ ``` {% endcode %} -The service is now ready to use in NSO. Start the [examples.ncs/service-management/implement-a-service/dns-v1](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/implement-a-service/dns-v1) example to set up a live NSO system with such a service and inspect how it works. Try configuring two different instances of the `dns` service. +The service is now ready to use in NSO. Start the [examples.ncs/service-management/implement-a-service/dns-v1](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/dns-v1) example to set up a live NSO system with such a service and inspect how it works. Try configuring two different instances of the `dns` service. ```bash $ cd $NCS_DIR/examples.ncs/service-management/implement-a-service/dns-v1 @@ -245,7 +245,7 @@ The remaining statements describe the functionality and input parameters that ar } ``` -Use the [examples.ncs/service-management/implement-a-service/dns-v2](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/implement-a-service/dns-v2) example to explore how this model works and try to discover what deficiencies it may have. +Use the [examples.ncs/service-management/implement-a-service/dns-v2](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/dns-v2) example to explore how this model works and try to discover what deficiencies it may have. ```bash $ cd $NCS_DIR/examples.ncs/service-management/implement-a-service/dns-v2 @@ -322,7 +322,7 @@ The following figure captures the relationship between the YANG model and the XM

XML Template and Model Relationship

-The complete service is available in the [examples.ncs/service-management/implement-a-service/dns-v2.1](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/implement-a-service/dns-v2.1) example. Feel free to investigate on your own how it differs from the initial, no-validation service. +The complete service is available in the [examples.ncs/service-management/implement-a-service/dns-v2.1](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/dns-v2.1) example. Feel free to investigate on your own how it differs from the initial, no-validation service. ```bash $ cd $NCS_DIR/examples.ncs/service-management/implement-a-service/dns-v2.1 @@ -518,7 +518,7 @@ You would typically create the service package skeleton with the `ncs-make-packa } ``` -The [examples.ncs/service-management/implement-a-service/iface-v1](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/implement-a-service/iface-v1) example contains the complete YANG module with this service model in the `packages/iface-v1/src/yang/iface.yang` file, as well as the corresponding service template in `packages/iface-v1/templates/iface-template.xml`. +The [examples.ncs/service-management/implement-a-service/iface-v1](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/iface-v1) example contains the complete YANG module with this service model in the `packages/iface-v1/src/yang/iface.yang` file, as well as the corresponding service template in `packages/iface-v1/templates/iface-template.xml`. ## FASTMAP and Service Life Cycle @@ -708,7 +708,7 @@ The complete create code for the service is: template.apply('iface-template', vars) ``` -You can test it out in the [examples.ncs/service-management/implement-a-service/iface-v2-py](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/implement-a-service/iface-v2-py) example. +You can test it out in the [examples.ncs/service-management/implement-a-service/iface-v2-py](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/iface-v2-py) example. ### Templates and Java Code @@ -799,7 +799,7 @@ The complete create code for the service is then: } ``` -You can test it out in the [examples.ncs/service-management/implement-a-service/iface-v2-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/implement-a-service/iface-v2-java) example. +You can test it out in the [examples.ncs/service-management/implement-a-service/iface-v2-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/iface-v2-java) example. ## Configuring Multiple Devices @@ -859,7 +859,7 @@ It performs the same as the one, which loops through the devices explicitly: ``` -Being explicit, the latter is usually much easier to understand and maintain for most developers. The [examples.ncs/service-management/implement-a-service/dns-v3](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/implement-a-service/dns-v3) demonstrates this syntax in the XML template. +Being explicit, the latter is usually much easier to understand and maintain for most developers. The [examples.ncs/service-management/implement-a-service/dns-v3](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/dns-v3) demonstrates this syntax in the XML template. 
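
If you prefer to keep the device loop in code rather than in the XML template, a roughly equivalent Python `create()` callback could look like the sketch below. The template name, the `DEVICE` variable, and the `device` leaf-list are illustrative and must match your own package.

{% code title="Example: Looping Over Devices in Python create() (sketch)" %}
```python
import ncs
from ncs.application import Service

class DnsServiceCallbacks(Service):
    @Service.create
    def cb_create(self, tctx, root, service, proplist):
        template = ncs.template.Template(service)
        for device_name in service.device:       # assumes a 'device' leaf-list
            tvars = ncs.template.Variables()
            tvars.add('DEVICE', device_name)      # referenced as {$DEVICE} in the template
            template.apply('dns-template', tvars)
```
{% endcode %}
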
### Supporting Different Device Types @@ -926,7 +926,7 @@ In case you need to further limit what configuration applies where and namespace ``` -The preceding template applies configuration for the interface only if the selected device uses the `cisco-ios-cli-3.0` NED-ID. You can find the full code as part of the [examples.ncs/service-management/implement-a-service/iface-v3](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/implement-a-service/iface-v3) example. +The preceding template applies configuration for the interface only if the selected device uses the `cisco-ios-cli-3.0` NED-ID. You can find the full code as part of the [examples.ncs/service-management/implement-a-service/iface-v3](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/iface-v3) example. ## Shared Service Settings and Auxiliary Data @@ -1061,7 +1061,7 @@ The following code, which performs the same thing but in a more verbose way, fur ``` -The complete service is available in the [examples.ncs/service-management/implement-a-service/dns-v3](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/implement-a-service/dns-v3) example. +The complete service is available in the [examples.ncs/service-management/implement-a-service/dns-v3](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/dns-v3) example. ## Service Actions @@ -1154,7 +1154,7 @@ class Main(ncs.application.Application): self.register_action('iface-test-enabled', IfaceActions) ``` -You can test the action in the [examples.ncs/service-management/implement-a-service/iface-v4-py](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/implement-a-service/iface-v4-py) example. +You can test the action in the [examples.ncs/service-management/implement-a-service/iface-v4-py](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/iface-v4-py) example. ### Action Code in Java @@ -1220,7 +1220,7 @@ The complete implementation requires you to supply your own Maapi read transacti } ``` -You can test the action in the [examples.ncs/service-management/implement-a-service/iface-v4-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/implement-a-service/iface-v4-java) example. +You can test the action in the [examples.ncs/service-management/implement-a-service/iface-v4-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/iface-v4-java) example. ## Operational Data @@ -1383,7 +1383,7 @@ def init_oper_data(state): return state ``` -The [examples.ncs/service-management/implement-a-service/iface-v5-py](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/implement-a-service/iface-v5-py) example implements such code. +The [examples.ncs/service-management/implement-a-service/iface-v5-py](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/iface-v5-py) example implements such code. ### Writing Operational Data in Java @@ -1448,7 +1448,7 @@ Another thing to keep in mind with operational data is that NSO by default does You can also register a custom `com.tailf.ncs.ApplicationComponent` class with the service application to populate the data on package load, if you are not using `tailf:persistent`. Please refer to [The Application Component Type](nso-virtual-machines/nso-java-vm.md#d5e1255) for details. 
-The [examples.ncs/service-management/implement-a-service/iface-v5-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/implement-a-service/iface-v5-java) example implements such code. +The [examples.ncs/service-management/implement-a-service/iface-v5-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/iface-v5-java) example implements such code. ## Nano Services for Provisioning with Side Effects @@ -1461,7 +1461,7 @@ The services discussed previously in this section were modeled to give all requi * Allocating a resource from an external system, such as an IP address, or generating an authentication key file using an external command. It is impossible to do this allocation from within the normal FASTMAP `create()` code since there is no way to deallocate the resource on commit, abort, or failure and when deleting the service. Furthermore, the `create()` code runs within the transaction lock. The time spent in services `create()` code should be as short as possible. * The service requires the start of one or more Virtual Machines, Virtual Network Functions. The VMs do not yet exist, and the `create()` code needs to trigger something that starts the VMs, and then later, when the VMs are operational, configure them. -The basic concepts of nano services are covered in detail by [Nano Services for Staged Provisioning](nano-services.md). The example in [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.5/getting-started/netsim-sshkey) implements SSH public key authentication setup using a nano service. The nano service uses the following steps in a plan that produces the `generated`, `distributed`, and `configured` states: +The basic concepts of nano services are covered in detail by [Nano Services for Staged Provisioning](nano-services.md). The example in [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey) implements SSH public key authentication setup using a nano service. The nano service uses the following steps in a plan that produces the `generated`, `distributed`, and `configured` states: 1. Generates the NSO SSH client authentication key files using the OpenSSH `ssh-keygen` utility from a nano service side-effect action implemented in Python. 2. Distributes the public key to the netsim (ConfD) network elements to be stored as an authorized key using a Python service `create()` callback. @@ -1470,7 +1470,7 @@ The basic concepts of nano services are covered in detail by [Nano Services for Upon deletion of the service instance, NSO restores the configuration. The only delete step in the plan is the `generated` state side-effect action that deletes the key files. The example is described in more detail in [Developing and Deploying a Nano Service](../../administration/installation-and-deployment/deployment/develop-and-deploy-a-nano-service.md). -The `basic-vrouter`, `netsim-vrouter`, and `mpls-vpn-vrouter` examples in the [examples.ncs/nano-services](https://github.com/NSO-developer/nso-examples/tree/6.5/nano-services) directory start, configure, and stop virtual devices. In addition, the `mpls-vpn-vrouter` example manages Layer3 VPNs in a service provider MPLS network consisting of physical and virtual devices. 
Using a Network Function Virtualization (NFV) setup, the L3VPN nano service instructs a VM manager nano service to start a virtual device in a multi-step process consisting of the following: +The `basic-vrouter`, `netsim-vrouter`, and `mpls-vpn-vrouter` examples in the [examples.ncs/nano-services](https://github.com/NSO-developer/nso-examples/tree/6.6/nano-services) directory start, configure, and stop virtual devices. In addition, the `mpls-vpn-vrouter` example manages Layer3 VPNs in a service provider MPLS network consisting of physical and virtual devices. Using a Network Function Virtualization (NFV) setup, the L3VPN nano service instructs a VM manager nano service to start a virtual device in a multi-step process consisting of the following: 1. When the L3VPN nano service `pe-create` state step create or delete a `/vm-manager/start` service configuration instance, the VM manager nano service instructs a VNF-M, called ESC, to start or stop the virtual device. 2. Wait for the ESC to start or stop the virtual device by monitoring and handling events. Here NETCONF notifications. @@ -1577,7 +1577,7 @@ You can use these general steps to give you a high-level idea of how to approach ``` \ - Trace ID can also be provided as a commit parameter in your service code, or as a RESTCONF query parameter. See [examples.ncs/sdk-api/maapi-commit-parameters](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/maapi-commit-parameters) for an example. + Trace ID can also be provided as a commit parameter in your service code, or as a RESTCONF query parameter. See [examples.ncs/sdk-api/maapi-commit-parameters](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/maapi-commit-parameters) for an example. 6. Measuring the time it takes for specific commands to complete can also give you some hints about what is going on. You can do this by using the `timecmd`, which requires the dev tools to be enabled. ```bash diff --git a/development/core-concepts/nano-services.md b/development/core-concepts/nano-services.md index b3fe781e..55ed59d0 100644 --- a/development/core-concepts/nano-services.md +++ b/development/core-concepts/nano-services.md @@ -10,7 +10,7 @@ Another limitation is that the service mapping code must not produce any side ef Nano services help you overcome these limitations. They implement a service as several smaller (nano) steps or stages, by using a technique called reactive FASTMAP (RFM), and provide a framework to safely execute actions with side effects. Reactive FASTMAP can also be implemented directly, using the CDB subscribers, but nano services offer a more streamlined and robust approach for staged provisioning. -The section starts by gradually introducing the nano service concepts in a typical use case. To aid readers working with nano services for the first time, some of the finer points are omitted in this part and discussed later on, in [Implementation Reference](nano-services.md#ug.nano_services.impl). The latter is designed as a reference to aid you during implementation, so it focuses on recapitulating the workings of nano services at the expense of examples. The rest of the chapter covers individual features with associated use cases and the complete working examples, which you may find in the [examples.ncs/nano-services](https://github.com/NSO-developer/nso-examples/tree/6.5/nano-services) folder. +The section starts by gradually introducing the nano service concepts in a typical use case. 
To aid readers working with nano services for the first time, some of the finer points are omitted in this part and discussed later on, in [Implementation Reference](nano-services.md#ug.nano_services.impl). The latter is designed as a reference to aid you during implementation, so it focuses on recapitulating the workings of nano services at the expense of examples. The rest of the chapter covers individual features with associated use cases and the complete working examples, which you may find in the [examples.ncs/nano-services](https://github.com/NSO-developer/nso-examples/tree/6.6/nano-services) folder. ## Basic Concepts @@ -30,7 +30,7 @@ For these reasons, service states are central to the design of a nano service. A By default, the plan outline consists of a single component, the `self` component, with the two states `init` and `ready`. It can be used to track the progress of the service as a whole. You can add any number of additional components and states to form the nano service. -The following YANG snippet, also part of the [examples.ncs/nano-services/basic-vrouter](https://github.com/NSO-developer/nso-examples/tree/6.5/nano-services/basic-vrouter) example, shows a plan outline with the two VM-provisioning states presented above: +The following YANG snippet, also part of the [examples.ncs/nano-services/basic-vrouter](https://github.com/NSO-developer/nso-examples/tree/6.6/nano-services/basic-vrouter) example, shows a plan outline with the two VM-provisioning states presented above: ```yang module vrouter { @@ -504,7 +504,7 @@ This is extremely useful, since you can access these values, as well as the ones ## Netsim Router Provisioning Example -The [examples.ncs/nano-services/netsim-vrouter](https://github.com/NSO-developer/nso-examples/tree/6.5/nano-services/netsim-vrouter) folder contains a complete implementation of a service that provisions a netsim device instance, onboards it to NSO, and pushes a sample interface configuration to the device. Netsim device creation is neither instantaneous nor side-effect-free and thus requires the use of a nano service. It more closely resembles a real-world use case for nano services. +The [examples.ncs/nano-services/netsim-vrouter](https://github.com/NSO-developer/nso-examples/tree/6.6/nano-services/netsim-vrouter) folder contains a complete implementation of a service that provisions a netsim device instance, onboards it to NSO, and pushes a sample interface configuration to the device. Netsim device creation is neither instantaneous nor side-effect-free and thus requires the use of a nano service. It more closely resembles a real-world use case for nano services. To see how the service is used through a prearranged scenario, execute the `make demo` command from the example folder. The scenario provisions and de-provisions multiple netsim devices to show different states and behaviors, characteristic of nano services. @@ -635,7 +635,7 @@ The built-in service-state-changes NETCONF/RESTCONF stream is used by NSO to gen When a service's plan component changes state, the `plan-state-change` notification is generated with the new state of the plan. It includes the status, which indicates one of not-reached, reached, or failed. The notification is sent when the state is `created`, `modified`, or `deleted`, depending on the configuration. For reference on the structure and all the fields present in the notification, please see the YANG model in the `tailf-ncs-plan.yang` file. 
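
One way to consume these events programmatically is to follow the stream over RESTCONF as Server-Sent Events. The sketch below is a minimal, non-authoritative example: the URL, port, and credentials assume a local NSO instance with default settings, the `requests` package must be installed, and the exact stream location should be discovered under `ietf-restconf-monitoring:restconf-state/streams` on your system.

{% code title="Example: Following the service-state-changes Stream (sketch)" %}
```python
# Minimal sketch: follow the built-in service-state-changes stream as
# Server-Sent Events over RESTCONF. URL and credentials are assumptions.
import requests

URL = 'http://localhost:8080/restconf/streams/service-state-changes/json'

with requests.get(URL, auth=('admin', 'admin'),
                  headers={'Accept': 'text/event-stream'}, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line.startswith(b'data:'):
            # Each event carries a JSON-encoded notification, e.g. plan-state-change.
            print(line[len(b'data:'):].strip().decode())
```
{% endcode %}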
-As a common use case, an event with status `reached` for the `self` component `ready` state signifies that all nano service components have reached their `ready` state and provisioning is complete. A simple example of this scenario is included in the [examples.ncs/nano-services/netsim-vrouter](https://github.com/NSO-developer/nso-examples/tree/6.5/nano-services/netsim-vrouter) `demo_rc.py` Python script, using RESTCONF. +As a common use case, an event with status `reached` for the `self` component `ready` state signifies that all nano service components have reached their `ready` state and provisioning is complete. A simple example of this scenario is included in the [examples.ncs/nano-services/netsim-vrouter](https://github.com/NSO-developer/nso-examples/tree/6.6/nano-services/netsim-vrouter) `demo_rc.py` Python script, using RESTCONF. To enable the plan-state-change notifications to be sent, you must enable them for a specific service in NSO. For example, you can load the following configuration into the CDB as an XML initialization file: @@ -666,7 +666,7 @@ This configuration enables notifications for the self component's ready state wh When a service is committed through the commit queue, this notification acts as a reference regarding the state of the service. Notifications are sent when the service commit queue item is waiting to run, executing, waiting to be unlocked, completed, failed, or deleted. More details on the `service-commit-queue-event` notification content can be found in the YANG model inside `tailf-ncs-services.yang`. -For example, the `failed` event can be used to detect that a nano service instance deployment failed because a configuration change committed through the commit queue has failed. Measures to resolve the issue can then be taken and the nano service instance can be re-deployed. A simple example of this scenario is included in the [examples.ncs/nano-services/netsim-vrouter](https://github.com/NSO-developer/nso-examples/tree/6.5/nano-services/netsim-vrouter) `demo_rc.py` Python script where the service is committed through the commit queue, using RESTCONF. By design, the configuration commit to a device fails, resulting in a `commit-queue-notification` with the `failed` event status for the commit queue item. +For example, the `failed` event can be used to detect that a nano service instance deployment failed because a configuration change committed through the commit queue has failed. Measures to resolve the issue can then be taken and the nano service instance can be re-deployed. A simple example of this scenario is included in the [examples.ncs/nano-services/netsim-vrouter](https://github.com/NSO-developer/nso-examples/tree/6.6/nano-services/netsim-vrouter) `demo_rc.py` Python script where the service is committed through the commit queue, using RESTCONF. By design, the configuration commit to a device fails, resulting in a `commit-queue-notification` with the `failed` event status for the commit queue item. To enable the service-commit-queue-event notifications to be sent, you can load the following example configuration into NSO, as an XML initialization file or some other way: @@ -783,7 +783,7 @@ notification ### The `label` and `trace-id` in the Notification -You have likely noticed the `label` and `trace-id` fields in the example notifications above.
The `label` is an optional but very useful parameter when committing the service configuration and the [Trace ID](../../administration/management/system-management/#d5e2587) is generated by NSO for each commit. They help you correlate events from the commit in the emitted log messages and the `service-state-changes` stream notifications. The above notifications, taken from the [examples.ncs/nano-services/netsim-vrouter](https://github.com/NSO-developer/nso-examples/tree/6.5/nano-services/netsim-vrouter) example, are emitted after applying a RESTCONF plain patch: +You have likely noticed the `label` and `trace-id` fields in the example notifications above. The `label` is an optional but very useful parameter when committing the service configuration and the [Trace ID](../../administration/management/system-management/#d5e2587) is generated by NSO for each commit. They help you correlate events from the commit in the emitted log messages and the `service-state-changes` stream notifications. The above notifications, taken from the [examples.ncs/nano-services/netsim-vrouter](https://github.com/NSO-developer/nso-examples/tree/6.6/nano-services/netsim-vrouter) example, are emitted after applying a RESTCONF plain patch: ``` $ curl -isu admin:admin -X PATCH @@ -1305,7 +1305,7 @@ The `service-commit-queue-event` helps detect that a nano service instance deplo ## Graceful Link Migration Example -You can find another nano service example under [examples.ncs/nano-services/link-migration](https://github.com/NSO-developer/nso-examples/tree/6.5/nano-services/link-migration). The example illustrates a situation with a simple VPN link that should be set up between two devices. The link is considered established only after it is tested and a `test-passed` leaf is set to `true`. If the VPN link changes, the new endpoints must be set up before removing the old endpoints, to avoid disturbing customer traffic during the operation. +You can find another nano service example under [examples.ncs/nano-services/link-migration](https://github.com/NSO-developer/nso-examples/tree/6.6/nano-services/link-migration). The example illustrates a situation with a simple VPN link that should be set up between two devices. The link is considered established only after it is tested and a `test-passed` leaf is set to `true`. If the VPN link changes, the new endpoints must be set up before removing the old endpoints, to avoid disturbing customer traffic during the operation. The package named `link` contains the nano service definition. The service has a list containing at most one element, which constitutes the VPN link and is keyed on a-device a-interface b-device b-interface. The list element corresponds to a component type `link:vlan-link` in the nano service plan. diff --git a/development/core-concepts/northbound-apis/nso-snmp-agent.md b/development/core-concepts/northbound-apis/nso-snmp-agent.md index 7224b8fb..cdc4b658 100644 --- a/development/core-concepts/northbound-apis/nso-snmp-agent.md +++ b/development/core-concepts/northbound-apis/nso-snmp-agent.md @@ -27,7 +27,7 @@ The usmHMACMD5AuthProtocol authentication protocol and the usmDESPrivProtocol pr The SNMP agent is configured through any of the normal NSO northbound interfaces. It is possible to control most aspects of the agent through, for example, the CLI. -The YANG models describing all configuration capabilities of the SNMP agent reside under `$NCS_DIR/src/ncs/snmp/snmp-agent-config/*.yang` in the NSO distribution.
+The YANG models describing all configuration capabilities of the SNMP agent reside under `$NCS_DIR/src/ncs/snmp/snmp-agent-cfg/*.yang` in the NSO distribution. An example session configuring the SNMP agent through the CLI may look like: diff --git a/development/core-concepts/nso-concurrency-model.md b/development/core-concepts/nso-concurrency-model.md index 81dbe4fa..40cdf7cd 100644 --- a/development/core-concepts/nso-concurrency-model.md +++ b/development/core-concepts/nso-concurrency-model.md @@ -206,7 +206,7 @@ The same functionality is available in Java as well, as the `Maapi.ncsRunWithRet As an alternative option, available only in Python, you can use the `retry_on_conflict()` function decorator. -Example code for each of these approaches is shown next. In addition, the [examples.ncs/scaling-performance/conflict-retry](https://github.com/NSO-developer/nso-examples/tree/6.5/scaling-performance/conflict-retry) example showcases this functionality as part of a concrete service. +Example code for each of these approaches is shown next. In addition, the [examples.ncs/scaling-performance/conflict-retry](https://github.com/NSO-developer/nso-examples/tree/6.6/scaling-performance/conflict-retry) example showcases this functionality as part of a concrete service. ## Example Retrying Code in Python diff --git a/development/core-concepts/nso-virtual-machines/embedded-erlang-applications.md b/development/core-concepts/nso-virtual-machines/embedded-erlang-applications.md index 94ba50c4..98bb054d 100644 --- a/development/core-concepts/nso-virtual-machines/embedded-erlang-applications.md +++ b/development/core-concepts/nso-virtual-machines/embedded-erlang-applications.md @@ -50,4 +50,4 @@ The following config settings in the `.app` file are explicitly treated by NSO: ## Example -The [examples.ncs/service-management/rfs-service-erlang](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/rfs-service-erlang) example in the bundled collection shows how to create a service written in Erlang and execute it internally in NSO. This Erlang example is a subset of the Java example [examples.ncs/service-management/rfs-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/rfs-service). +The [examples.ncs/service-management/rfs-service-erlang](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/rfs-service-erlang) example in the bundled collection shows how to create a service written in Erlang and execute it internally in NSO. This Erlang example is a subset of the Java example [examples.ncs/service-management/rfs-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/rfs-service). diff --git a/development/core-concepts/nso-virtual-machines/nso-python-vm.md b/development/core-concepts/nso-virtual-machines/nso-python-vm.md index aa550671..669bf654 100644 --- a/development/core-concepts/nso-virtual-machines/nso-python-vm.md +++ b/development/core-concepts/nso-virtual-machines/nso-python-vm.md @@ -482,5 +482,5 @@ Using virtual environments with NSO Python packages provides several advantages: * Check the Python VM log, `ncs-python-vm.log`, for activation messages to verify the Python virtual environment used by the NSO package. 
{% hint style="info" %} -The [examples.ncs/misc/py-venv-package](https://github.com/NSO-developer/nso-examples/tree/main/misc/py-venv-package) example demonstrates how to either install Python package dependencies in the NSO package `python` directory, or as an alternative, use a Python virtual environment to manage dependencies that automatically activates when the Python VM for a package starts. +The [examples.ncs/misc/py-venv-package](https://github.com/NSO-developer/nso-examples/tree/6.6/misc/py-venv-package) example demonstrates how to either install Python package dependencies in the NSO package `python` directory, or as an alternative, use a Python virtual environment to manage dependencies that automatically activates when the Python VM for a package starts. {% endhint %} diff --git a/development/core-concepts/packages.md b/development/core-concepts/packages.md index 81c2f870..40eb5652 100644 --- a/development/core-concepts/packages.md +++ b/development/core-concepts/packages.md @@ -39,7 +39,7 @@ The optional `webui` directory contains the WEB UI customization files. ## An Example Package -The NSO example collection contains a number of small self-contained examples. The collection resides at `$NCS_DIR/examples.ncs`. Each of these examples defines a package. Let's take a look at some of these packages. The example [examples.ncs/device-management/aggregated-stats](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/aggregated-stats) has a package `./packages/stats`. The `package-meta-data.xml` file for that package looks like this: +The NSO example collection contains a number of small self-contained examples. The collection resides at `$NCS_DIR/examples.ncs`. Each of these examples defines a package. Let's take a look at some of these packages. The example [examples.ncs/device-management/aggregated-stats](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/aggregated-stats) has a package `./packages/stats`. The `package-meta-data.xml` file for that package looks like this: {% code title="An Example Package" %} ```xml @@ -168,7 +168,7 @@ The order of the XML entries in a `package-meta-data.xml` must be in the same order as the model shown above. {% endhint %} -A sample package configuration is taken from the [examples.ncs/nano-services/netsim-vrouter](https://github.com/NSO-developer/nso-examples/tree/6.5/nano-services/netsim-vrouter) example: +A sample package configuration is taken from the [examples.ncs/nano-services/netsim-vrouter](https://github.com/NSO-developer/nso-examples/tree/6.6/nano-services/netsim-vrouter) example: ```bash $ ncs_load -o -Fp -p /packages @@ -245,7 +245,7 @@ Below is a brief list of the configurables in the `tailf-ncs-packages.yang` YANG * `directory` - the path to the directory of the package. * `templates` - the templates defined by the package. * `template-loading-mode` - control if the templates are interpreted in strict or relaxed mode. -* `supported-ned-id` - the list of ned-ids supported by this package. An example of the expected format taken from the [examples.ncs/nano-services/netsim-vrouter](https://github.com/NSO-developer/nso-examples/tree/6.5/nano-services/netsim-vrouter) example:
An example of the expected format taken from the [examples.ncs/nano-services/netsim-vrouter](https://github.com/NSO-developer/nso-examples/tree/6.6/nano-services/netsim-vrouter) example: ```xml @@ -300,12 +300,12 @@ A Network Element Driver component is used southbound of NSO to communicate with There are four different types of NEDs: -* **NETCONF**: used for NETCONF-enabled devices such as Juniper routers, ConfD-powered devices, or any device that speaks proper NETCONF and also has YANG models. Plenty of packages in the NSO example collection have NETCONF NED components, for example, [examples.ncs/device-management/router-network](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/router-network) under `packages/router`. +* **NETCONF**: used for NETCONF-enabled devices such as Juniper routers, ConfD-powered devices, or any device that speaks proper NETCONF and also has YANG models. Plenty of packages in the NSO example collection have NETCONF NED components, for example, [examples.ncs/device-management/router-network](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/router-network) under `packages/router`. * **SNMP**: Used for SNMP devices. - The example [examples.ncs/device-management/snmp-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/snmp-ned) has a package that has an SNMP NED component. -* **CLI**: used for CLI devices. The [examples.ncs/device-management/cli-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/cli-ned) example has a package called `router-cli-1.0` that defines a NED component of type CLI. -* **Generic**: used for generic NED devices. The example [examples.ncs/device-management/](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/xmlrpc-device)[generic-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/generic-ned) has a package called `xml-rpc` which defines a NED component of type generic. + The example [examples.ncs/device-management/snmp-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/snmp-ned) has a package that has an SNMP NED component. +* **CLI**: used for CLI devices. The [examples.ncs/device-management/cli-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/cli-ned) example has a package called `router-cli-1.0` that defines a NED component of type CLI. +* **Generic**: used for generic NED devices. The example [examples.ncs/device-management/](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/xmlrpc-device)[generic-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/generic-ned) has a package called `xml-rpc` which defines a NED component of type generic. A CLI NED and a generic NED component must also come with additional user-written Java code, whereas a NETCONF NED and an SNMP NED have no Java code. @@ -330,29 +330,29 @@ The `Stats` class here implements a read-only data provider. See [DP API](api-ov The `callback` type of component is used for a wide range of callback-type Java applications, where one of the most important are the Service Callbacks. The following list of Java callback annotations applies to callback components. -* `ServiceCallback` to implement service-to-device mappings. 
See the example: [examples.ncs/service-management/rfs-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/rfs-service) See [Developing NSO Services](../advanced-development/developing-services/) for a thorough introduction to services. -* `ActionCallback` to implement user-defined `tailf:actions` or YANG RPC and actions. See the examples: [examples.ncs/sdk-api/actions-python](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/actions-py) and [examples.ncs/sdk-api/actions-java](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/actions-java). -* `DataCallback` to implement the data getters and setters for a data provider. See the example [examples.ncs/device-management/aggregated-stats](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/aggregated-stats). -* `TransCallback` to implement the transaction portions of a data provider callback. See the example [examples.ncs/device-management/aggregated-stats](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/aggregated-stats). -* `DBCallback` to implement an external database. See the example: [examples.ncs/sdk-api/external-db](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/external-db). -* `SnmpInformResponseCallback` to implement an SNMP listener - See the example [examples.ncs/device-management/snmp-notification-receiver](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/snmp-notification-receiver). +* `ServiceCallback` to implement service-to-device mappings. See the example: [examples.ncs/service-management/rfs-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/rfs-service) See [Developing NSO Services](../advanced-development/developing-services/) for a thorough introduction to services. +* `ActionCallback` to implement user-defined `tailf:actions` or YANG RPC and actions. See the examples: [examples.ncs/sdk-api/actions-python](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/actions-py) and [examples.ncs/sdk-api/actions-java](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/actions-java). +* `DataCallback` to implement the data getters and setters for a data provider. See the example [examples.ncs/device-management/aggregated-stats](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/aggregated-stats). +* `TransCallback` to implement the transaction portions of a data provider callback. See the example [examples.ncs/device-management/aggregated-stats](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/aggregated-stats). +* `DBCallback` to implement an external database. See the example: [examples.ncs/sdk-api/external-db](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/external-db). +* `SnmpInformResponseCallback` to implement an SNMP listener - See the example [examples.ncs/device-management/snmp-notification-receiver](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/snmp-notification-receiver). * `TransValidateCallback`_,_ `ValidateCallback` to implement a user-defined validation hook that gets invoked on every commit. * `AuthCallback` to implement a user hook that gets called whenever a user is authenticated by the system. * `AuthorizationCallback` to implement an authorization hook that allows/disallows users to do operations and/or access data. 
Note, that this callback should normally be avoided since, by nature, invoking a callback for any operation and/or data element is a performance impairment. -A package that has a `callback` component usually has some YANG code and then also some Java code that relates to that YANG code. By convention, the YANG and the Java code reside in a src directory in the component. When the source of the package is built, any resulting `fxs` files (compiled YANG files) must reside in the `load-dir` of package and any resulting Java compilation results must reside in the `shared-jar` and `private-jar` directories. Study the [examples.ncs/device-management/aggregated-stats](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/aggregated-stats) example to see how this is achieved. +A package that has a `callback` component usually has some YANG code and then also some Java code that relates to that YANG code. By convention, the YANG and the Java code reside in a src directory in the component. When the source of the package is built, any resulting `fxs` files (compiled YANG files) must reside in the `load-dir` of package and any resulting Java compilation results must reside in the `shared-jar` and `private-jar` directories. Study the [examples.ncs/device-management/aggregated-stats](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/aggregated-stats) example to see how this is achieved. #### Application Used to cover Java applications that do not fit into the callback type. Typically this is functionality that should be running in separate threads and work autonomously. -The example [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/cdb-java) contains three components that are of type `application`. These components must also contain a `java-class-name` element. For application components, that Java class must implement the `ApplicationComponent` Java interface. +The example [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/cdb-java) contains three components that are of type `application`. These components must also contain a `java-class-name` element. For application components, that Java class must implement the `ApplicationComponent` Java interface. #### Upgrade Used to migrate data for packages where the yang model has changed and the automatic CDB upgrade is not sufficient. The upgrade component consists of a Java class with a main method that is expected to run one time only. -The example [examples.ncs/service-management/upgrade-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/upgrade-service) illustrates user CDB upgrades using `upgrade` components. +The example [examples.ncs/service-management/upgrade-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/upgrade-service) illustrates user CDB upgrades using `upgrade` components. ## Creating Packages @@ -421,10 +421,10 @@ Assuming we have a set of MIB files in `./mibs`, we can generate a package for a ### Creating a CLI NED Package or a Generic NED Package -For CLI NEDs and Generic NEDs, we cannot (yet) generate the package. Probably the best option for such packages is to start with one of the examples. 
A good starting point for a CLI NED is the [examples.ncs/device-management/cli-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/cli-ned) and a good starting point for a Generic NED is the example [examples.ncs/device-management/generic-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/generic-ned). +For CLI NEDs and Generic NEDs, we cannot (yet) generate the package. Probably the best option for such packages is to start with one of the examples. A good starting point for a CLI NED is the [examples.ncs/device-management/cli-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/cli-ned) and a good starting point for a Generic NED is the example [examples.ncs/device-management/generic-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/generic-ned). ### Creating a Service Package or a Data Provider Package The `ncs-make-package` can be used to generate empty skeleton packages for a data provider and a simple service. The flags `--service-skeleton` and `--data-provider-skeleton`. -Alternatively, one of the examples can be modified to provide a good starting point. For example [examples.ncs/service-management/rfs-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/rfs-service). +Alternatively, one of the examples can be modified to provide a good starting point. For example [examples.ncs/service-management/rfs-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/rfs-service). diff --git a/development/core-concepts/templates.md b/development/core-concepts/templates.md index 38059b6f..2f53ebd4 100644 --- a/development/core-concepts/templates.md +++ b/development/core-concepts/templates.md @@ -181,7 +181,7 @@ The action takes a number of arguments to control how the resulting template loo * `import-user-modules` - Import device YANG modules and their defined types in the generated YANG module. * `collapse-list-keys` - Decides what lists to parameterize, either `all`, `automatic` (default), or those specified by the `list-path` parameter. The default is to find lists that differ among the device configurations. -The [examples.ncs/service-management/implement-a-service/dns-v3](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/implement-a-service/dns-v3) environment can be used to try the command. +The [examples.ncs/service-management/implement-a-service/dns-v3](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/dns-v3) environment can be used to try the command. {% code overflow="wrap" %} ```bash @@ -920,7 +920,7 @@ $ ncs_cmd -c "x /devices/device[name='c0']/config/ios:interface/FastEthernet/nam ### Example Debug Template Output -The following text walks through the output of the `debug template` command for a dns-v3 example service, found in [examples.ncs/service-management/implement-a-service/dns-v3](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/implement-a-service/dns-v3). To try it out for yourself, start the example with `make demo` and configure a service instance: +The following text walks through the output of the `debug template` command for a dns-v3 example service, found in [examples.ncs/service-management/implement-a-service/dns-v3](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/implement-a-service/dns-v3). 
To try it out for yourself, start the example with `make demo` and configure a service instance: ```bash admin@ncs# config diff --git a/development/core-concepts/using-cdb.md b/development/core-concepts/using-cdb.md index a836dc27..00fd2a2c 100644 --- a/development/core-concepts/using-cdb.md +++ b/development/core-concepts/using-cdb.md @@ -13,9 +13,9 @@ The figure below illustrates the architecture of when the CDB is used. The Appli

NSO CDB Architecture Scenario

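To get a feel for how an application reads configuration data out of CDB, the following minimal Python sketch (using the NSO Python API and assuming a running NSO instance with a few devices onboarded) opens a read-only transaction and walks the device list; the user and context names are illustrative.

```python
import ncs

# Open a read-only transaction towards CDB and list the managed devices.
with ncs.maapi.single_read_trans('admin', 'system') as t:
    root = ncs.maagic.get_root(t)
    for dev in root.devices.device:
        print(dev.name, dev.address)
```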
-While CDB is the default data store for configuration data in NSO, it is possible to use an external database, if needed. See the example [examples.ncs/sdk-api/external-db](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/external-db) for details. +While CDB is the default data store for configuration data in NSO, it is possible to use an external database, if needed. See the example [examples.ncs/sdk-api/external-db](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/external-db) for details. -In the following, we will use the files in [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-java) as a source for our examples. Refer to `README` in that directory for additional details. +In the following, we will use the files in [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-java) as a source for our examples. Refer to `README` in that directory for additional details. ## The NSO Data Model @@ -266,17 +266,17 @@ Since write operations that do not attempt to obtain the subscription lock are a ## Example -We will take a first look at the [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/cdb-java) example. This example is an NSO project with two packages: `cdb` and `router`. +We will take a first look at the [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/cdb-java) example. This example is an NSO project with two packages: `cdb` and `router`. ### Example packages -* `router`: A NED package with a simple but still realistic model of a network device. The only component in this package is the NED component that uses NETCONF to communicate with the device. This package is used in many NSO examples including [examples.ncs/device-management/router-network](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/router-network) which is an introduction to NSO device manager, NSO netsim, and this router package. +* `router`: A NED package with a simple but still realistic model of a network device. The only component in this package is the NED component that uses NETCONF to communicate with the device. This package is used in many NSO examples including [examples.ncs/device-management/router-network](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/router-network) which is an introduction to NSO device manager, NSO netsim, and this router package. * `cdb`: This package has an even simpler YANG model to illustrate some aspects of CDB data retrieval. The package consists of five application components: * Plain CDB Subscriber: This CDB subscriber subscribes to changes under the path `/devices/device{ex0}/config`. Whenever a change occurs there, the code iterates through the change and prints the values. * CdbCfgSubscriber: A more advanced CDB subscriber that subscribes to changes under the path `/devices/device/config/sys/interfaces/interface`. * OperSubscriber: An operational data subscriber that subscribes to changes under the path `/t:test/stats-item`. -The [examples.ncs/sdk-api/cdb-py](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/cdb-py) and [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/cdb-java) examples `packages/cdb` package includes the YANG model in the in the example below:. 
+The [examples.ncs/sdk-api/cdb-py](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/cdb-py) and [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/cdb-java) examples' `packages/cdb` package includes the YANG model shown in the example below: {% code title="Example: Simple Config Data" %} ```yang @@ -533,7 +533,7 @@ The `finish()` method (Example below (Plain Subscriber `finish`)) is called when ``` {% endcode %} -We will now compile and start the [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/cdb-java) example, populate some config data, and look at the result. The example below (Plain Subscriber Startup) shows how to do this. +We will now compile and start the [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/cdb-java) example, populate some config data, and look at the result. The example below (Plain Subscriber Startup) shows how to do this. {% code title="Example: Plain Subscriber Startup" %} ```bash @@ -583,7 +583,7 @@ NAME ``` {% endcode %} -We have now added a server to the Syslog. What remains is to check what our 'Plain CDB Subscriber' `ApplicationComponent` got as a result of this update. In the `logs` directory of the [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/cdb-java) example there is a file named `PlainCdbSub.out` which contains the log data from this application component. At the beginning of this file, a lot of logging is performed which emanates from the `sync-from` of the device. At the end of this file, we can find the three log rows that come from our update. See the extract in the example below (Plain Subscriber Output) (with each row split over several lines to fit on the page). +We have now added a server to the Syslog. What remains is to check what our 'Plain CDB Subscriber' `ApplicationComponent` got as a result of this update. In the `logs` directory of the [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/cdb-java) example there is a file named `PlainCdbSub.out` which contains the log data from this application component. At the beginning of this file, a lot of logging is performed which emanates from the `sync-from` of the device. At the end of this file, we can find the three log rows that come from our update. See the extract in the example below (Plain Subscriber Output) (with each row split over several lines to fit on the page). {% code title="Example: Plain Subscriber Output" %} ``` @@ -678,7 +678,7 @@ If we look at the file `logs/ConfigCdbSub.out`, we will find log records from th ### Operational Data -We will look once again at the YANG model for the CDB package in the [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/cdb-java) example. Inside the `test.yang` YANG model, there is a `test` container. As a child in this container, there is a list `stats-item` (see the example below (CDB Simple Operational Data)). +We will look once again at the YANG model for the CDB package in the [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/cdb-java) example. Inside the `test.yang` YANG model, there is a `test` container. As a child in this container, there is a list `stats-item` (see the example below (CDB Simple Operational Data)).
{% code title="Example: CDB Simple Operational Data" %} ```yang @@ -762,7 +762,7 @@ An example of Java code that deletes operational data using the CDB API is shown ``` {% endcode %} -In the [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/cdb-java) example the `cdb` package, there is also an application component with an operational data subscriber that subscribes to data from the path `"/t:test/stats-item"` (see the example below (CDB Operational Subscriber Java code)). +In the [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/cdb-java) example the `cdb` package, there is also an application component with an operational data subscriber that subscribes to data from the path `"/t:test/stats-item"` (see the example below (CDB Operational Subscriber Java code)). {% code title="Example: CDB Operational Subscriber Java code" %} ```java @@ -858,7 +858,7 @@ public class OperCdbSub implements ApplicationComponent, CdbDiffIterate { Notice that the `CdbOperSubscriber` is very similar to the `CdbConfigSubscriber` described earlier. -In the [examples.ncs/sdk-api/cdb-py](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/cdb-py) and [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/cdb-java) examples, there are two shell scripts `setoper` and `deloper` that will execute the above `CreateEntry()` and `DeleteEntry()` respectively. We can use these to populate the operational data in CDB for the `test.yang` YANG model (see the example below (Populating Operational Data)). +In the [examples.ncs/sdk-api/cdb-py](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/cdb-py) and [examples.ncs/sdk-api/cdb-java](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/cdb-java) examples, there are two shell scripts `setoper` and `deloper` that will execute the above `CreateEntry()` and `DeleteEntry()` respectively. We can use these to populate the operational data in CDB for the `test.yang` YANG model (see the example below (Populating Operational Data)). {% code title="Example: Populating Operational Data" %} ```bash @@ -1160,7 +1160,7 @@ So how should an upgrade component be implemented? In the previous section, we d So the CDB Java/Python API can be used to read data defined by the old YANG models. To write new config data Maapi has a specific method `Maapi.attachInit()`. This method attaches a Maapi instance to the upgrade transaction (or init transaction) during `phase0`. This special upgrade transaction is only available during `phase0`. NSO will commit this transaction when the `phase0` is ended, so the user should only write config data (not attempt to commit, etc.). -We take a look at the example [examples.ncs/service-management/upgrade-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/upgrade-service) to see how an upgrade component can be implemented. Here the _vlan_ package has an original version which is replaced with a version `vlan_v2`. See the `vlan_v2-py` package for a Python variant. See the `README` and play with examples to get acquainted. +We take a look at the example [examples.ncs/service-management/upgrade-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/upgrade-service) to see how an upgrade component can be implemented. Here the _vlan_ package has an original version which is replaced with a version `vlan_v2`. 
See the `vlan_v2-py` package for a Python variant. See the `README` and play with the examples to get acquainted. {% hint style="info" %} The `upgrade-service` is a `service` package upgrade example. But the upgrade components described here work equally well, and in the same way, for any package type. The only requirement is that the package contains at least one YANG model for the upgrade component to have meaning. If not, the upgrade component will never be executed. @@ -1388,7 +1388,7 @@ At the end of the program, the sockets are closed. Important to note is that no

NSO Advanced Service Upgrade

-In the [examples.ncs/service-management/upgrade-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/upgrade-service) example, this more complicated scenario is illustrated with the `tunnel` package. See the `tunnel-py` package for a Python variant. The `tunnel` package YANG model maps the `vlan_v2` package one-to-one but is a complete rename of the model containers and all leafs: +In the [examples.ncs/service-management/upgrade-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/upgrade-service) example, this more complicated scenario is illustrated with the `tunnel` package. See the `tunnel-py` package for a Python variant. The `tunnel` package YANG model maps the `vlan_v2` package one-to-one but is a complete rename of the model containers and all leafs: {% code title="Example: Tunnel Service YANG Model" %} ```yang diff --git a/development/introduction-to-automation/applications-in-nso.md b/development/introduction-to-automation/applications-in-nso.md index bd5bb35c..952f2ccb 100644 --- a/development/introduction-to-automation/applications-in-nso.md +++ b/development/introduction-to-automation/applications-in-nso.md @@ -134,7 +134,7 @@ The last thing to note in the above action code definition is the use of the dec ## Showcase - Implementing Device Count Action {% hint style="info" %} -See [examples.ncs/getting-started/applications-nso](https://github.com/NSO-developer/nso-examples/blob/6.4/getting-started/applications-nso) for an example implementation. +See [examples.ncs/getting-started/applications-nso](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/applications-nso) for an example implementation. {% endhint %} ### Prerequisites @@ -387,7 +387,7 @@ result 3 You can use the `show devices list` command to verify that the result is correct. You can alter the address of any device and see how it affects the result. You can even use a hostname, such as `localhost`. {% hint style="info" %} -Other examples of action implementations can be found under [examples.ncs/sdk-api](https://github.com/NSO-developer/nso-examples/tree/main/sdk-api). +Other examples of action implementations can be found under [examples.ncs/sdk-api](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api). {% endhint %} ## Overview of Extension Points @@ -462,7 +462,7 @@ There are some important points worth noting for action timeout: * Implementing your own abort action callback in `cb_abort` allows you to handle actions that are timing out. If `cb_abort` is not defined, NSO cannot trigger the abort action during a timeout, preventing it from unlocking the action for a user session. Consequently, you must wait for the action callback to finish before attempting it again. {% hint style="info" %} -See [examples.ncs/sdk-api/action-abort-py](https://github.com/NSO-developer/nso-examples/tree/main/sdk-api/action-abort-py) for an example of how to implement an abortable Python action that spawns a separate worker process using the multiprocessing library and returns the worker's outcome via a result queue or terminates the worker if the action is aborted. +See [examples.ncs/sdk-api/action-abort-py](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/action-abort-py) for an example of how to implement an abortable Python action that spawns a separate worker process using the multiprocessing library and returns the worker's outcome via a result queue or terminates the worker if the action is aborted. 
{% endhint %} For NSO operational data queries, NSO uses `query-timeout` to ensure the data provider return operational data within the given time. If the data provider fails to do so within the stipulated timeout, NSO will close its end of the control socket to the data provider. The NSO VMs will detect the socket close and exit. @@ -477,7 +477,7 @@ As your NSO application evolves, you will create newer versions of your applicat When you replace a package, NSO must redeploy the application code and potentially replace the package-provided part of the YANG schema. For the latter, NSO can perform the data migration for you, as long as the schema is backward compatible. This process is documented in [Automatic Schema Upgrades and Downgrades](../core-concepts/using-cdb.md#ug.cdb.upgrade) and is automatic when you request a reload of the package with `packages reload` or a similar command. -If your schema changes are not backward compatible, you can implement a data migration procedure, which NSO invokes when upgrading the schema. Among other things, this allows you to reuse and migrate the data that is no longer present in the new schema. You can specify the migration procedure as part of the `package-meta-data.xml` file, using a component of the `upgrade` type. See [The Upgrade Component](../core-concepts/nso-virtual-machines/nso-python-vm.md#ncs.development.pythonvm.upgrade) (Python) and [examples.ncs/service-management/upgrade-service](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/upgrade-service) example (Java) for details. +If your schema changes are not backward compatible, you can implement a data migration procedure, which NSO invokes when upgrading the schema. Among other things, this allows you to reuse and migrate the data that is no longer present in the new schema. You can specify the migration procedure as part of the `package-meta-data.xml` file, using a component of the `upgrade` type. See [The Upgrade Component](../core-concepts/nso-virtual-machines/nso-python-vm.md#ncs.development.pythonvm.upgrade) (Python) and [examples.ncs/service-management/upgrade-service](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/upgrade-service) example (Java) for details. Note that changing the schema in any way requires you to recompile the `.fxs` files in the package, which is typically done by running `make` in the package's `src` folder. diff --git a/development/introduction-to-automation/basic-automation-with-python.md b/development/introduction-to-automation/basic-automation-with-python.md index 2efeea61..fd42126d 100644 --- a/development/introduction-to-automation/basic-automation-with-python.md +++ b/development/introduction-to-automation/basic-automation-with-python.md @@ -115,7 +115,7 @@ Now let's see how you can use this knowledge for network automation. ## Showcase - Configuring DNS with Python {% hint style="info" %} -See [examples.ncs/getting-started/basic-automation](https://github.com/NSO-developer/nso-examples/blob/6.4/getting-started/basic-automation) for an example implementation. +See [examples.ncs/getting-started/basic-automation](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/basic-automation) for an example implementation. 
{% endhint %} ### **Prerequisites** @@ -124,11 +124,11 @@ See [examples.ncs/getting-started/basic-automation](https://github.com/NSO-devel ### Step 1 - Start the Routers -Leveraging one of the examples included with the NSO installation allows you to quickly gain access to an NSO instance with a few devices already onboarded. The [examples.ncs/device-management](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management) set of examples contains three simulated routers that you can configure. +Leveraging one of the examples included with the NSO installation allows you to quickly gain access to an NSO instance with a few devices already onboarded. The [examples.ncs/device-management](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management) set of examples contains three simulated routers that you can configure.

The Lab Topology

-1. Navigate to the [router-network](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/router-network) directory with the following command. +1. Navigate to the [router-network](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/router-network) directory with the following command. ```bash $ cd $NCS_DIR/examples.ncs/device-management/router-network diff --git a/development/introduction-to-automation/cdb-and-yang.md b/development/introduction-to-automation/cdb-and-yang.md index 80962109..e0a8c1c3 100644 --- a/development/introduction-to-automation/cdb-and-yang.md +++ b/development/introduction-to-automation/cdb-and-yang.md @@ -49,7 +49,7 @@ However, the CDB can't use the YANG files directly. The bundled compiler, `ncsc` ## Showcase: Extending the CDB with Packages {% hint style="info" %} -See [examples.ncs/getting-started/cdb-yang](https://github.com/NSO-developer/nso-examples/blob/6.4/getting-started/cdb-yang) for an example implementation. +See [examples.ncs/getting-started/cdb-yang](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/cdb-yang) for an example implementation. {% endhint %} ### Prerequisites @@ -271,7 +271,7 @@ Combining just these four fundamental YANG node types, you can build a very comp ## Showcase: Building and Testing a Model {% hint style="info" %} -See [examples.ncs/getting-started/cdb-yang](https://github.com/NSO-developer/nso-examples/blob/6.4/getting-started/cdb-yang) for an example implementation. +See [examples.ncs/getting-started/cdb-yang](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/cdb-yang) for an example implementation. {% endhint %} ### Prerequisites diff --git a/development/introduction-to-automation/develop-a-simple-service.md b/development/introduction-to-automation/develop-a-simple-service.md index a011f81f..bb6eef2a 100644 --- a/development/introduction-to-automation/develop-a-simple-service.md +++ b/development/introduction-to-automation/develop-a-simple-service.md @@ -140,7 +140,7 @@ Finally, your Python script can read the supplied values inside the `cb_create() ## Showcase - A Simple DNS Configuration Service {% hint style="info" %} -See [examples.ncs/getting-started/develop-service](https://github.com/NSO-developer/nso-examples/tree/6.5/getting-started/develop-service) for an example implementation. +See [examples.ncs/getting-started/develop-service](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/develop-service) for an example implementation. {% endhint %} ### Prerequisites @@ -151,7 +151,7 @@ See [examples.ncs/getting-started/develop-service](https://github.com/NSO-develo ### Step 1 - Prepare Simulated Routers -The [examples.ncs/getting-started/develop-service/init](https://github.com/NSO-developer/nso-examples/tree/6.5/getting-started/develop-service/init) holds a package, Makefile, and an XML initialization file you can use for this scenario to start the routers and connect them to your NSO instance. +The [examples.ncs/getting-started/develop-service/init](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/develop-service/init) holds a package, Makefile, and an XML initialization file you can use for this scenario to start the routers and connect them to your NSO instance. First, copy the package and files to your `NSO_RUNDIR`: @@ -456,7 +456,7 @@ Likewise, you can use the same XPath in a template of a Python service. 
Then you ## Showcase - DNS Configuration Service with Templates {% hint style="info" %} -See [examples.ncs/getting-started/develop-service](https://github.com/NSO-developer/nso-examples/tree/6.5/getting-started/develop-service) for an example implementation. +See [examples.ncs/getting-started/develop-service](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/develop-service) for an example implementation. {% endhint %} ### Prerequisites @@ -467,7 +467,7 @@ See [examples.ncs/getting-started/develop-service](https://github.com/NSO-develo ### Step 1 - Prepare Simulated Routers -The [examples.ncs/getting-started/develop-service/init](https://github.com/NSO-developer/nso-examples/tree/6.5/getting-started/develop-service/init) holds a package, Makefile, and an XML initialization file you can use for this scenario to start the routers and connect them to your NSO instance. +The [examples.ncs/getting-started/develop-service/init](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/develop-service/init) holds a package, Makefile, and an XML initialization file you can use for this scenario to start the routers and connect them to your NSO instance. First, copy the package and files to your `NSO_RUNDIR`: diff --git a/images/ai-assistant.png b/images/ai-assistant.png new file mode 100644 index 00000000..39c2b6ae Binary files /dev/null and b/images/ai-assistant.png differ diff --git a/images/compliance-reports-results.png b/images/compliance-reports-results.png index 649fb23b..27f453bc 100644 Binary files a/images/compliance-reports-results.png and b/images/compliance-reports-results.png differ diff --git a/images/compliance-reports.png b/images/compliance-reports.png index cb2dd651..7873336e 100644 Binary files a/images/compliance-reports.png and b/images/compliance-reports.png differ diff --git a/images/compliance-templates.png b/images/compliance-templates.png new file mode 100644 index 00000000..bcb3c1e6 Binary files /dev/null and b/images/compliance-templates.png differ diff --git a/images/ha-raft.png b/images/ha-raft.png new file mode 100644 index 00000000..3419a541 Binary files /dev/null and b/images/ha-raft.png differ diff --git a/images/ha-rule.png b/images/ha-rule.png new file mode 100644 index 00000000..5c328933 Binary files /dev/null and b/images/ha-rule.png differ diff --git a/images/nsowebui.png b/images/nsowebui.png index 18e4e075..37438149 100644 Binary files a/images/nsowebui.png and b/images/nsowebui.png differ diff --git a/images/packages.png b/images/packages.png index 866f7f3a..d38ca433 100644 Binary files a/images/packages.png and b/images/packages.png differ diff --git a/images/tools-view.png b/images/tools-view.png index d5af8840..e029f8b3 100644 Binary files a/images/tools-view.png and b/images/tools-view.png differ diff --git a/operation-and-usage/operations/alarm-manager.md b/operation-and-usage/operations/alarm-manager.md index 889abdc6..b7c14821 100644 --- a/operation-and-usage/operations/alarm-manager.md +++ b/operation-and-usage/operations/alarm-manager.md @@ -316,7 +316,7 @@ The following typedef defines the different states an alarm can be set into. It is of course also possible to manipulate the alarm handling list from either Java code or Javascript code running in the web browser using the `js_maapi` library. -Below is a simple scenario to illustrate the alarm concepts. 
The example can be found in [examples.ncs/service-management/mpls-vpn-simple](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-simple). +Below is a simple scenario to illustrate the alarm concepts. The example can be found in [examples.ncs/service-management/mpls-vpn-simple](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-simple). ```bash $ make stop clean all start diff --git a/operation-and-usage/operations/basic-operations.md b/operation-and-usage/operations/basic-operations.md index 790a1376..a388717c 100644 --- a/operation-and-usage/operations/basic-operations.md +++ b/operation-and-usage/operations/basic-operations.md @@ -21,7 +21,7 @@ Note that both the NSO software (NCS) and the simulated network devices run on y To start the simulator: -1. Go to [examples.ncs/device-management/simulated-cisco-ios](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/simulated-cisco-ios). First of all, we will generate a network simulator with three Cisco devices. They will be called `c0`, `c1`, and `c2`. +1. Go to [examples.ncs/device-management/simulated-cisco-ios](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/simulated-cisco-ios). First of all, we will generate a network simulator with three Cisco devices. They will be called `c0`, `c1`, and `c2`. {% hint style="info" %} Most of this section follows the procedure in the `README` file, so it is useful to have it opened as well. @@ -68,7 +68,7 @@ This shows that the device has some initial configurations. The previous step started the simulated Cisco devices. It is now time to start NSO. -1. The first action is to prepare directories needed for NSO to run and populate NSO with information on the simulated devices. This is all done with the `ncs-setup` command. Make sure that you are in the [examples.ncs/device-management/simulated-cisco-ios](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/simulated-cisco-ios) directory. (Again, ignore the details for the time being). +1. The first action is to prepare directories needed for NSO to run and populate NSO with information on the simulated devices. This is all done with the `ncs-setup` command. Make sure that you are in the [examples.ncs/device-management/simulated-cisco-ios](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/simulated-cisco-ios) directory. (Again, ignore the details for the time being). ```bash $ ncs-setup --netsim-dir ./netsim --dest . diff --git a/operation-and-usage/operations/compliance-reporting.md b/operation-and-usage/operations/compliance-reporting.md index 49a6f28c..de5f2251 100644 --- a/operation-and-usage/operations/compliance-reporting.md +++ b/operation-and-usage/operations/compliance-reporting.md @@ -23,7 +23,7 @@ Reports can be generated using either the CLI or Web UI. The suggested and favor It is possible to create several named compliance report definitions. Each named report defines the devices, services, and/or templates that should be part of the network configuration verification. -Let us walk through a simple compliance report definition. This example is based on the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-java) example. For the details of the included services and devices in this example, see the `README` file. +Let us walk through a simple compliance report definition. 
This example is based on the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-java) example. For the details of the included services and devices in this example, see the `README` file. Each report definition has a name and can specify device and service checks. Device checks are further classified into sync and configuration checks. Device sync checks verify the in-sync status of the devices included in the report, while device configuration checks verify individual device configuration against a compliance template (see [Device Configuration Checks](compliance-reporting.md#device-configuration-checks)). diff --git a/operation-and-usage/operations/managing-network-services.md b/operation-and-usage/operations/managing-network-services.md index 7ed70a39..112d2432 100644 --- a/operation-and-usage/operations/managing-network-services.md +++ b/operation-and-usage/operations/managing-network-services.md @@ -27,7 +27,7 @@ An example is the best method to illustrate how services are created and used in Watch a video presentation of this demo on [YouTube](https://www.youtube.com/watch?v=sYuETSuTsrM). {% endhint %} -The example [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-java) will be used to explain NSO Service Management features. This example illustrates Layer-3 VPNs in a service provider MPLS network. The example network consists of Cisco ASR 9k and Juniper core routers (P and PE) and Cisco IOS-based CE routers. The Layer-3 VPN service configures the CE/PE routers for all endpoints in the VPN with BGP as the CE/PE routing protocol. The layer-2 connectivity between CE and PE routers is expected to be done through a Layer-2 ethernet access network, which is out of scope for this example. The Layer-3 VPN service includes VPN connectivity as well as bandwidth and QOS parameters. +The example [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-java) will be used to explain NSO Service Management features. This example illustrates Layer-3 VPNs in a service provider MPLS network. The example network consists of Cisco ASR 9k and Juniper core routers (P and PE) and Cisco IOS-based CE routers. The Layer-3 VPN service configures the CE/PE routers for all endpoints in the VPN with BGP as the CE/PE routing protocol. The layer-2 connectivity between CE and PE routers is expected to be done through a Layer-2 ethernet access network, which is out of scope for this example. The Layer-3 VPN service includes VPN connectivity as well as bandwidth and QOS parameters.
[Figure: A L3 VPN Example]
@@ -635,7 +635,7 @@ To have NSO deploy services across devices, two pieces are needed: ### Defining the Service Model -The first step is to generate a skeleton package for a service (for details, see [Packages](../../administration/management/package-mgmt.md)). Create a directory under, for example, `~/my-sim-ios`similar to how it is done for the [examples.ncs/device-management/simulated-cisco-ios](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/simulated-cisco-ios) example. Make sure that you have stopped any running NSO and netsim. +The first step is to generate a skeleton package for a service (for details, see [Packages](../../administration/management/package-mgmt.md)). Create a directory under, for example, `~/my-sim-ios`similar to how it is done for the [examples.ncs/device-management/simulated-cisco-ios](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/simulated-cisco-ios) example. Make sure that you have stopped any running NSO and netsim. Navigate to the simulated ios directory and create a new package for the VLAN service model: @@ -750,7 +750,7 @@ This simple VLAN service model says: The good thing with NSO is that already at this point you could load the service model to NSO and try if it works well in the CLI etc. Nothing would happen to the devices since we have not defined the mapping, but this is normally the way to iterate a model and test the CLI towards the network engineers. -To build this service model `cd` to the [examples.ncs/device-management/simulated-cisco-ios](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/simulated-cisco-ios) example `/packages/vlan/src` directory and type `make` (assuming you have the `make` build system installed). +To build this service model `cd` to the [examples.ncs/device-management/simulated-cisco-ios](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/simulated-cisco-ios) example `/packages/vlan/src` directory and type `make` (assuming you have the `make` build system installed). ```bash $ make @@ -1162,7 +1162,7 @@ A limitation in the scenarios described so far is that the mapping definition co Nano services using Reactive FASTMAP handle these scenarios with an executable plan that the system can follow to provision the service. The general idea is to implement the service as several smaller (nano) steps or stages, by using reactive FASTMAP and provide a framework to safely execute actions with side effects. -The [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.5/getting-started/netsim-sshkey) example implements key generation to files and service deployment of the key to set up network elements and NSO for public key authentication to illustrate this concept. The example is described in more detail in [Develop and Deploy a Nano Service](../../administration/installation-and-deployment/deployment/develop-and-deploy-a-nano-service.md). +The [examples.ncs/getting-started/netsim-sshkey](https://github.com/NSO-developer/nso-examples/tree/6.6/getting-started/netsim-sshkey) example implements key generation to files and service deployment of the key to set up network elements and NSO for public key authentication to illustrate this concept. The example is described in more detail in [Develop and Deploy a Nano Service](../../administration/installation-and-deployment/deployment/develop-and-deploy-a-nano-service.md). 
## Reconciling Existing Services diff --git a/operation-and-usage/operations/neds-and-adding-devices.md b/operation-and-usage/operations/neds-and-adding-devices.md index 5b601103..86f7fc3e 100644 --- a/operation-and-usage/operations/neds-and-adding-devices.md +++ b/operation-and-usage/operations/neds-and-adding-devices.md @@ -106,7 +106,7 @@ All devices have a `admin-state` with default value `southbound-locked`. This me ### CLI NEDs -(See also [examples.ncs/device-management/real-device-cisco-ios](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/real-device-cisco-ios)). Straightforward, adding a new device on a specific address, standard SSH port: +(See also [examples.ncs/device-management/real-device-cisco-ios](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/real-device-cisco-ios)). Straightforward, adding a new device on a specific address, standard SSH port: ```cli admin@ncs(config)# devices device c7 address 1.2.3.4 port 22 \ @@ -121,7 +121,7 @@ admin@ncs(config-device-c7)# commit ### NETCONF NEDs, JunOS -See also [examples.ncs/device-management/real-device-juniper](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/real-device-juniper). Make sure that NETCONF over SSH is enabled on the JunOS device: +See also [examples.ncs/device-management/real-device-juniper](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/real-device-juniper). Make sure that NETCONF over SSH is enabled on the JunOS device: ``` junos1% show system services @@ -146,7 +146,7 @@ admin@ncs(config-device-junos1)# commit ### SNMP NEDs -(See also [examples.ncs/device-management/snmp-ned](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/snmp-ned).) First of all, let's explain SNMP NEDs a bit. By default all read-only objects are mapped to operational data in NSO and read-write objects are mapped to configuration data. This means that a sync-from operation will load read-write objects into NSO. How can you reach read-only objects? Note the following is true for all NED types that have modeled operational data. The device configuration exists at `devices device config` and has a copy in CDB. NSO can speak live to the device to fetch for example counters by using the path `devices device live-status`: +(See also [examples.ncs/device-management/snmp-ned](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/snmp-ned).) First of all, let's explain SNMP NEDs a bit. By default all read-only objects are mapped to operational data in NSO and read-write objects are mapped to configuration data. This means that a sync-from operation will load read-write objects into NSO. How can you reach read-only objects? Note the following is true for all NED types that have modeled operational data. The device configuration exists at `devices device config` and has a copy in CDB. NSO can speak live to the device to fetch for example counters by using the path `devices device live-status`: ```cli admin@ncs# show devices device r1 live-status SNMPv2-MIB diff --git a/operation-and-usage/operations/network-simulator-netsim.md b/operation-and-usage/operations/network-simulator-netsim.md index 2bcda883..765731ae 100644 --- a/operation-and-usage/operations/network-simulator-netsim.md +++ b/operation-and-usage/operations/network-simulator-netsim.md @@ -36,7 +36,7 @@ Usage ncs-netsim [--dir ] [-w | --window] [cli | cli-c | cli-i] devname ``` -Assume that you have prepared an NSO package for a device called `router`. 
(See the [examples.ncs/device-management/router-network](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/router-network) example). Also, assume the package is in `./packages/router`. At this point, you can create the simulated network by: +Assume that you have prepared an NSO package for a device called `router`. (See the [examples.ncs/device-management/router-network](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/router-network) example). Also, assume the package is in `./packages/router`. At this point, you can create the simulated network by: ```bash $ ncs-netsim create-network ./packages/router 3 device --dir ./netsim @@ -158,4 +158,4 @@ $ NCS_IPC_PORT=5010 ncs_load -m -l *.xml ### Learn More -The README file in [examples.ncs/device-management/router-network](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/router-network) example gives a good introduction on how to use `ncs-netsim`. +The README file in [examples.ncs/device-management/router-network](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/router-network) example gives a good introduction on how to use `ncs-netsim`. diff --git a/operation-and-usage/operations/nso-device-manager.md b/operation-and-usage/operations/nso-device-manager.md index 1b4ef007..0c4e880d 100644 --- a/operation-and-usage/operations/nso-device-manager.md +++ b/operation-and-usage/operations/nso-device-manager.md @@ -21,7 +21,7 @@ To understand the main idea behind the NSO device manager it is necessary to und The NEDs will publish YANG data models even for non-NETCONF devices. In the case of SNMP the YANG models are generated from the MIBs. For JunOS devices the JunOS NED generates a YANG from the JunOS XML Schema. For Schema-less devices like CLI devices, the NED developer writes YANG models corresponding to the CLI structure. The result of this is the device manager and NSO CDB has YANG data models for all devices independent of the underlying protocol. -Throughout this section, we will use the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-java) example. The example network consists of Cisco ASR 9k and Juniper core routers (P and PE) and Cisco IOS-based CE routers. +Throughout this section, we will use the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-java) example. The example network consists of Cisco ASR 9k and Juniper core routers (P and PE) and Cisco IOS-based CE routers.
[Figure: NSO Example Network]
@@ -317,7 +317,7 @@ NSO provides the ability to synchronize the configuration to or from the device. In the normal case, the configuration on the device and the copy of the configuration inside NSO should be identical. -In a cold start situation like in the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-java) example, where NSO is empty and there are network devices to talk to, it makes sense to synchronize from the devices. You can choose to synchronize from one device at a time or from all devices at once. Here is a CLI session to illustrate this. +In a cold start situation like in the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-java) example, where NSO is empty and there are network devices to talk to, it makes sense to synchronize from the devices. You can choose to synchronize from one device at a time or from all devices at once. Here is a CLI session to illustrate this. {% code title="Example: Synchronize From Devices" %} ```cli @@ -507,7 +507,7 @@ This makes it possible to investigate the changes before they are transmitted to ### Partial `sync-from` -It is possible to synchronize a part of the configuration (a certain subtree) from the device using the `partial-sync-from` action located under /devices. While it is primarily intended to be used by service developers as described in [Partial Sync](../../development/advanced-development/developing-services/services-deep-dive.md#ch_svcref.partialsync), it is also possible to use directly from the NSO CLI (or any other northbound interface). The example below (Example of Running partial-sync-from Action via CLI) illustrates using this action via CLI, using a router device from the [examples.ncs/device-management/router-network](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/router-network) example. +It is possible to synchronize a part of the configuration (a certain subtree) from the device using the `partial-sync-from` action located under /devices. While it is primarily intended to be used by service developers as described in [Partial Sync](../../development/advanced-development/developing-services/services-deep-dive.md#ch_svcref.partialsync), it is also possible to use directly from the NSO CLI (or any other northbound interface). The example below (Example of Running partial-sync-from Action via CLI) illustrates using this action via CLI, using a router device from the [examples.ncs/device-management/router-network](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/router-network) example. {% code title="Example: Example of Running partial-sync-from Action via CLI" %} ```bash @@ -1865,7 +1865,7 @@ This section shows how device templates can be used to create and change device Device templates are part of the NSO configuration. Device templates are created and changed in the tree `/devices/template/config` the same way as any other configuration data and are affected by rollbacks and upgrades. Device templates can only manipulate configuration data in the `/devices/device/config` tree i.e., only device data. -The [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-java) example comes with a pre-populated template for SNMP settings. 
+The [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-java) example comes with a pre-populated template for SNMP settings. ```cli ncs(config)# show full-configuration devices template @@ -2577,7 +2577,7 @@ ncs(config)# devices device pe2 rpc \ rpc-get-software-information get-software-information brief ``` -In the simulated environment of the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/mpls-vpn-java) example, these RPCs might not have been implemented. +In the simulated environment of the [examples.ncs/service-management/mpls-vpn-java](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/mpls-vpn-java) example, these RPCs might not have been implemented. ## Device Groups @@ -3166,7 +3166,7 @@ NETCONF Call Home is enabled and configured under `/ncs-config/netconf-call-home A device can be connected through the NETCONF Call Home client only if `/devices/device/state/admin-state` is set to `call-home`. This state prevents any southbound communication to the device unless the connection has already been established through the NETCONF Call Home client protocol. -See [examples.ncs/northbound-interfaces/netconf-call-home](https://github.com/NSO-developer/nso-examples/tree/6.5/northbound-interfaces/netconf-call-home) for an example. +See [examples.ncs/northbound-interfaces/netconf-call-home](https://github.com/NSO-developer/nso-examples/tree/6.6/northbound-interfaces/netconf-call-home) for an example. ## Notifications @@ -3192,7 +3192,7 @@ Notifications must be defined at the top level of a YANG module. NSO does curren ### An Example Session -In this section, we will use the [examples.ncs/device-management/web-server-basic](https://github.com/NSO-developer/nso-examples/tree/6.5/device-management/web-server-basic) example. +In this section, we will use the [examples.ncs/device-management/web-server-basic](https://github.com/NSO-developer/nso-examples/tree/6.6/device-management/web-server-basic) example. Let's dive into an example session with the NSO CLI. In the NSO example collection, the webserver publishes two NETCONF notification structures, indicating what they intend to send to any interested listeners. 
They all have the YANG module: diff --git a/operation-and-usage/operations/out-of-band-interoperation.md b/operation-and-usage/operations/out-of-band-interoperation.md index 1f7e9f37..2462c968 100644 --- a/operation-and-usage/operations/out-of-band-interoperation.md +++ b/operation-and-usage/operations/out-of-band-interoperation.md @@ -189,7 +189,7 @@ Specifying `manage-by-service` not only updates device configuration in the CDB ### Rule Behavior Example -Consider a setup from [examples.ncs/service-management/confirm-network-state](https://github.com/NSO-developer/nso-examples/tree/6.5/service-management/confirm-network-state), started by `make demo`, with the following out-of-band policy: +Consider a setup from [examples.ncs/service-management/confirm-network-state](https://github.com/NSO-developer/nso-examples/tree/6.6/service-management/confirm-network-state), started by `make demo`, with the following out-of-band policy: ``` services out-of-band policy iface-servicepoint diff --git a/operation-and-usage/operations/plug-and-play-scripting.md b/operation-and-usage/operations/plug-and-play-scripting.md index a2f56238..7a141a33 100644 --- a/operation-and-usage/operations/plug-and-play-scripting.md +++ b/operation-and-usage/operations/plug-and-play-scripting.md @@ -57,7 +57,7 @@ Yet another parameter may be useful when debugging the reload of scripts: * `debug`: Shows additional debug info about the scripts. -An example session reloading scripts using the [examples.ncs/sdk-api/scripting](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/scripting) example: +An example session reloading scripts using the [examples.ncs/sdk-api/scripting](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/scripting) example: ```cli admin@ncs# script reload all @@ -272,7 +272,7 @@ done ncs-maapi --set "/nacm/groups/group{${group}}/user-name" "${gusers} ${user}" ``` -Running the [examples.ncs/sdk-api/scripting](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/scripting) `/scripts/command/echo.sh` script with the argument `--command` argument produces a `command` section and a couple of `param` sections: +Running the [examples.ncs/sdk-api/scripting](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/scripting) `/scripts/command/echo.sh` script with the argument `--command` argument produces a `command` section and a couple of `param` sections: ```bash $ ./echo.sh --command @@ -305,7 +305,7 @@ begin param end ``` -In the complete example, [examples.ncs/sdk-api/scripting](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/scripting), there is a `README` file and a simple command script `scripts/command/echo.sh`. +In the complete example, [examples.ncs/sdk-api/scripting](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/scripting), there is a `README` file and a simple command script `scripts/command/echo.sh`. ## Policy Scripts @@ -449,7 +449,7 @@ Aborted: /devices/global-settings/trace-dir: must retain it original value (./logs) ``` -In the complete example, [examples.ncs/sdk-api/scripting](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/scripting) there is a `README` file and a simple policy script `scripts/policy/check_dir.sh`. +In the complete example, [examples.ncs/sdk-api/scripting](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/scripting) there is a `README` file and a simple policy script `scripts/policy/check_dir.sh`. 
## Post-commit Scripts @@ -536,4 +536,4 @@ AutoGenerated mail from NCS value set : /devices/global-settings/trace-dir ``` -In the complete example, [examples.ncs/sdk-api/scripting](https://github.com/NSO-developer/nso-examples/tree/6.5/sdk-api/scripting) , there is a `README` file and a simple post-commit script `scripts/post-commit/show_diff.sh`. +In the complete example, [examples.ncs/sdk-api/scripting](https://github.com/NSO-developer/nso-examples/tree/6.6/sdk-api/scripting) , there is a `README` file and a simple post-commit script `scripts/post-commit/show_diff.sh`. diff --git a/operation-and-usage/operations/ssh-key-management.md b/operation-and-usage/operations/ssh-key-management.md index c4fe6196..e2f3e8e6 100644 --- a/operation-and-usage/operations/ssh-key-management.md +++ b/operation-and-usage/operations/ssh-key-management.md @@ -27,7 +27,7 @@ The public keys that are authorized for authentication of a given user must be p ## NSO as SSH Client -NSO can act as an SSH client for connections to managed devices that use SSH (this is always the case for devices accessed via NETCONF, typically also for devices accessed via CLI), and for connections to other nodes in an NSO cluster. In all cases, a built-in SSH client is used. The [examples.ncs/aaa/ssh-keys](https://github.com/NSO-developer/nso-examples/tree/6.5/aaa/ssh-keys) example in the NSO example collection has a detailed walk-through of the NSO functionality that is described in this section. +NSO can act as an SSH client for connections to managed devices that use SSH (this is always the case for devices accessed via NETCONF, typically also for devices accessed via CLI), and for connections to other nodes in an NSO cluster. In all cases, a built-in SSH client is used. The [examples.ncs/aaa/ssh-keys](https://github.com/NSO-developer/nso-examples/tree/6.6/aaa/ssh-keys) example in the NSO example collection has a detailed walk-through of the NSO functionality that is described in this section. ### Host Key Verification diff --git a/operation-and-usage/webui/README.md b/operation-and-usage/webui/README.md index 60a8e68a..78aab27b 100644 --- a/operation-and-usage/webui/README.md +++ b/operation-and-usage/webui/README.md @@ -9,7 +9,7 @@ The NSO Web UI provides an intuitive northbound interface to your NSO deployment The main components of the Web UI are shown in the figure below. -
[Figure: NSO Web UI Overview]
The UI works by auto-rendering the underlying device and service models. This gives the benefit that the Web UI is immediately updated when new devices or services are added to the system. For example, say you have added support for a new device vendor. Then, without any programming requirements, the NSO Web UI provides the capability to configure those devices. @@ -63,7 +63,7 @@ The Commit Manager is accessible at all times from the UI header. A number, corr ## AI Assistant -The WebUI integrates an AI Assistant to enhance your interaction and experience of NSO. The availability of the AI Assistant is controlled by your administrator and indicated by the AI Assistant icon () displayed in the UI header. +The WebUI integrates an AI Assistant to enhance your interaction and experience of NSO. The availability of the AI Assistant is controlled by your administrator and indicated by the AI Assistant icon () displayed in the UI header. {% hint style="info" %} #### Administrative Info on Enabling the AI Assistant diff --git a/operation-and-usage/webui/tools.md b/operation-and-usage/webui/tools.md index cf18074c..d26e7c1d 100644 --- a/operation-and-usage/webui/tools.md +++ b/operation-and-usage/webui/tools.md @@ -6,7 +6,7 @@ description: Tools to view NSO status and perform specialized tasks. The **Tools** view includes utilities that you can use to run specific tasks on your deployment. -
[Figure: Tools View]
The following tools are available: @@ -31,7 +31,7 @@ The **Insights** view collects and displays the following types of operational i In the **Packages** view, you can upload, install, and view the operational state of custom packages in NSO. -
[Figure: Packages View]
### Add a Package @@ -89,7 +89,7 @@ Available Rule-based HA actions are described further under [Actions](../../admi An example cluster of a Rule-based HA setup is shown below. -
[Figure: High Availability View (Rule-based)]
### Raft HA @@ -97,7 +97,7 @@ The Raft HA view displays overview of your cluster and provides options to manag Available Raft HA actions are described further under [Actions](../../administration/management/high-availability.md#ch_ha.raft_actions), and can be run directly in the Web UI. Specific parameters and field definitions shown in the view are covered in detail in the rest of the [HA documentation](../../administration/management/high-availability.md). -
[Figure: High Availability View (Raft)]
#### Handover Cluster Leadership @@ -111,7 +111,7 @@ Perform the handover as follows: #### Actions on a Node -Actions on a node, such as **Add node**, **Remove node**, **Disconnect**, etc., are available by accessing the more options button on a node. Most of the actions in Raft HA can only be executed from the leader node. +Actions on a node, such as **Add node**, **Remove node**, **Disconnect**, etc., are available by accessing the more options button on a node. Most of the actions in Raft HA can only be executed from the leader node. #### Logs and Certificates @@ -229,7 +229,7 @@ The following tabs are available in this view: The **Compliance reports** tab is used to view, create, run, and manage the existing compliance reports. -
[Figure: Compliance Reports View]
#### **Create a Compliance Report** @@ -276,7 +276,7 @@ To run a compliance report: The **Reports results** tab is used to view the status and results of the compliance reports that have been run. -
[Figure: Reports Results View]
#### View Compliance Report Results @@ -295,7 +295,7 @@ Use the **Export to file** button to export the report results to a downloadable The **Compliance Templates** tab is used to create new compliance templates and manage existing ones. -
[Figure: Compliance Templates View]
There are two ways to create a compliance template: diff --git a/resources/man/clispec.5.md b/resources/man/clispec.5.md index 50196ca3..6ca5a2f3 100644 --- a/resources/man/clispec.5.md +++ b/resources/man/clispec.5.md @@ -13,9 +13,12 @@ operations and customizable confirmation prompts. In Cisco style custom mode-specific commands can be added by specifying a mount point relating to the specified mode. -> [!TIP] -> In the NSO distribution there is an Emacs mode suitable for clispec -> editing. +
+
+In the NSO distribution there is an Emacs mode suitable for clispec
+editing.
+
+
A clispec file (with a .cli suffix) is to be compiled using the `ncsc` compiler into an internal representation (with a .ccl suffix), ready to diff --git a/resources/man/confd_lib_cdb.3.md b/resources/man/confd_lib_cdb.3.md index 6740fbf0..172e774a 100644 --- a/resources/man/confd_lib_cdb.3.md +++ b/resources/man/confd_lib_cdb.3.md @@ -580,11 +580,13 @@ A call to `cdb_connect()` is typically followed by a call to either `cdb_start_session()` for a reading session or a call to `cdb_subscribe()` for a subscription socket. -> **Note** -> -> If this call fails (i.e. does not return CONFD_OK), the socket -> descriptor must be closed and a new socket created before the call is -> re-attempted. +
+
+If this call fails (i.e. does not return CONFD_OK), the socket
+descriptor must be closed and a new socket created before the call is
+re-attempted.
+
+
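
As a minimal illustration of the note above, the sketch below recreates the socket on every failed `cdb_connect()` attempt before retrying. The loopback address, `CONFD_PORT`, retry count, and back-off are example assumptions, not part of the library API.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>

#include <confd_lib.h>
#include <confd_cdb.h>

/* Connect a CDB data socket, creating a fresh socket for every failed
   attempt as the note above requires. Returns the connected socket or
   -1 after 'retries' failed attempts. */
static int connect_cdb(struct sockaddr_in *addr, int retries)
{
    int i;
    for (i = 0; i < retries; i++) {
        int sock = socket(PF_INET, SOCK_STREAM, 0);
        if (sock < 0)
            return -1;
        if (cdb_connect(sock, CDB_DATA_SOCKET,
                        (struct sockaddr *)addr, sizeof(*addr)) == CONFD_OK)
            return sock;            /* connected */
        close(sock);                /* do not reuse this descriptor */
        sleep(1);                   /* back off before the next attempt */
    }
    return -1;
}

int main(void)
{
    struct sockaddr_in addr;
    int sock;

    confd_init("cdb-retry-example", stderr, CONFD_TRACE);
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");
    addr.sin_port = htons(CONFD_PORT);   /* assumed local ConfD/NSO instance */

    if ((sock = connect_cdb(&addr, 5)) < 0) {
        fprintf(stderr, "failed to connect to CDB\n");
        return 1;
    }
    /* ... cdb_start_session() or cdb_subscribe() would follow here ... */
    cdb_close(sock);
    return 0;
}
```
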
*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS @@ -600,11 +602,13 @@ names to be used for different connections from the same application process, we can use `cdb_connect_name()` with the wanted name instead of `cdb_connect()`. -> **Note** -> -> If this call fails (i.e. does not return CONFD_OK), the socket -> descriptor must be closed and a new socket created before the call is -> re-attempted. +
+
+If this call fails (i.e. does not return CONFD_OK), the socket
+descriptor must be closed and a new socket created before the call is
+re-attempted.
+
+
*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS @@ -632,9 +636,11 @@ one `cdb_subscribe2()` call followed by a `cdb_subscribe_done()` call. A call to `cdb_mandatory_subscriber()` is only allowed before the first call of `cdb_subscribe2()`. -> **Note** -> -> Only applicable for two-phase subscribers. +
+
+Only applicable for two-phase subscribers.
+
+
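
A small sketch of how a mandatory two-phase subscriber might be set up; the subscriber name `my-sub`, the priority `100`, and the `/devices` path are example assumptions.

```c
#include <sys/socket.h>
#include <arpa/inet.h>

#include <confd_lib.h>
#include <confd_cdb.h>

/* Register a mandatory two-phase subscriber on a subscription socket. */
static int setup_twophase_sub(struct sockaddr_in *addr)
{
    int subsock, spoint;

    if ((subsock = socket(PF_INET, SOCK_STREAM, 0)) < 0)
        return -1;
    if (cdb_connect(subsock, CDB_SUBSCRIPTION_SOCKET,
                    (struct sockaddr *)addr, sizeof(*addr)) != CONFD_OK)
        return -1;

    /* Must be called before the first cdb_subscribe2() call. */
    if (cdb_mandatory_subscriber(subsock, "my-sub") != CONFD_OK)
        return -1;

    /* Two-phase subscription: CDB_SUB_PREPARE as well as CDB_SUB_COMMIT
       notifications will be delivered for changes under /devices. */
    if (cdb_subscribe2(subsock, CDB_SUB_RUNNING_TWOPHASE, 0, 100,
                       &spoint, 0, "/devices") != CONFD_OK)
        return -1;
    if (cdb_subscribe_done(subsock) != CONFD_OK)
        return -1;

    return subsock;
}
```
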
*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS @@ -722,10 +728,13 @@ parameter should be one of: > further details about working with operational data in CDB, see the > `OPERATIONAL DATA` section below. > -> > [!NOTE] -> > Subscriptions on operational data will not be triggered from a -> > session created with this function - to trigger operational data -> > subscriptions, we need to use `cdb_start_session2()`, see below. +>
+> +> Subscriptions on operational data will not be triggered from a session +> created with this function - to trigger operational data +> subscriptions, we need to use `cdb_start_session2()`, see below. +> +>
*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_LOCKED, CONFD_ERR_NOEXISTS @@ -966,12 +975,14 @@ function passed to `cdb_diff_iterate()`), or with a data socket that has an active session. The timeout is given in seconds from the point in time when the function is called. -> **Note** -> -> The timeout for subscription delivery is common for all the -> subscribers receiving notifications at a given priority. Thus calling -> the function during subscription delivery changes the timeout for all -> the subscribers that are currently processing notifications. +
+
+The timeout for subscription delivery is common for all the subscribers
+receiving notifications at a given priority. Thus calling the function
+during subscription delivery changes the timeout for all the subscribers
+that are currently processing notifications.
+
+
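
A hedged sketch of how a subscriber might extend the shared timeout while it processes a notification; the 600-second value is arbitrary.

```c
#include <confd_lib.h>
#include <confd_cdb.h>

/* Called while processing a subscription notification, before starting a
   long-running task: extend the shared delivery timeout (in seconds).
   Per the note above, this also affects every other subscriber that is
   currently processing notifications at this priority. */
static void extend_delivery_timeout(int subsock)
{
    if (cdb_set_timeout(subsock, 600) != CONFD_OK)
        confd_fatal("cdb_set_timeout() failed\n");
}
```
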
*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_PROTOUSAGE, CONFD_ERR_BADSTATE @@ -1187,14 +1198,16 @@ several differences from the subscriptions for configuration data: - A special synchronization reply must be used when the notifications have been read (see `cdb_sync_subscription_socket()` below). -> **Note** -> -> Operational and configuration subscriptions can be done on the same -> socket, but in that case the notifications may be arbitrarily -> interleaved, including operational notifications arriving between -> different configuration notifications for the same transaction. If -> this is a problem, use separate sockets for operational and -> configuration subscriptions. +
+
+Operational and configuration subscriptions can be done on the same
+socket, but in that case the notifications may be arbitrarily
+interleaved, including operational notifications arriving between
+different configuration notifications for the same transaction. If this
+is a problem, use separate sockets for operational and configuration
+subscriptions.
+
+
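
Following that advice, a sketch that keeps the two kinds of subscriptions on separate sockets; the `/devices` and `/stats` paths and the priority are example assumptions.

```c
#include <sys/socket.h>
#include <arpa/inet.h>

#include <confd_lib.h>
#include <confd_cdb.h>

static int conf_sock, oper_sock;
static int conf_point, oper_point;

/* One subscription socket for configuration changes and a separate one
   for operational data, so their notifications can never interleave. */
static int setup_subscriptions(struct sockaddr_in *addr)
{
    conf_sock = socket(PF_INET, SOCK_STREAM, 0);
    oper_sock = socket(PF_INET, SOCK_STREAM, 0);
    if (conf_sock < 0 || oper_sock < 0)
        return CONFD_ERR;

    if (cdb_connect(conf_sock, CDB_SUBSCRIPTION_SOCKET,
                    (struct sockaddr *)addr, sizeof(*addr)) != CONFD_OK ||
        cdb_connect(oper_sock, CDB_SUBSCRIPTION_SOCKET,
                    (struct sockaddr *)addr, sizeof(*addr)) != CONFD_OK)
        return CONFD_ERR;

    /* Configuration changes under /devices on one socket ... */
    if (cdb_subscribe(conf_sock, 100, 0, &conf_point, "/devices") != CONFD_OK)
        return CONFD_ERR;
    if (cdb_subscribe_done(conf_sock) != CONFD_OK)
        return CONFD_ERR;

    /* ... and operational data under an example container on the other. */
    if (cdb_oper_subscribe(oper_sock, 0, &oper_point, "/stats") != CONFD_OK)
        return CONFD_ERR;
    if (cdb_subscribe_done(oper_sock) != CONFD_OK)
        return CONFD_ERR;

    return CONFD_OK;
}
```
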
*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_BADPATH, CONFD_ERR_NOEXISTS @@ -1262,18 +1275,22 @@ because one of the subscribers that received `CDB_SUB_PREPARE` called `cdb_sub_abort_trans()`, but it could also be caused for other reasons, for example another data provider (than CDB) can abort the transaction. -> **Note** -> -> Two phase subscriptions are not supported for NCS. +
+
+Two phase subscriptions are not supported for NCS.
+
+
+ +
+ +Operational and configuration subscriptions can be done on the same +socket, but in that case the notifications may be arbitrarily +interleaved, including operational notifications arriving between +different configuration notifications for the same transaction. If this +is a problem, use separate sockets for operational and configuration +subscriptions. -> **Note** -> -> Operational and configuration subscriptions can be done on the same -> socket, but in that case the notifications may be arbitrarily -> interleaved, including operational notifications arriving between -> different configuration notifications for the same transaction. If -> this is a problem, use separate sockets for operational and -> configuration subscriptions. +
*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_BADPATH, CONFD_ERR_NOEXISTS @@ -1712,13 +1729,15 @@ possible to iterate over a list, and for each list instance fetch the changes using `cdb_get_modifications_iter()`, and then return `ITER_CONTINUE` to process next instance. -> **Note** -> -> Note: The `CDB_GET_MODS_REVERSE` flag is ignored by -> `cdb_get_modifications_iter()`. It will instead return a "forward" or -> "reverse" list of modifications for a `CDB_SUB_ABORT` notification -> according to whether the `ITER_WANT_REVERSE` flag was included in the -> `flags` parameter of the `cdb_diff_iterate()` call. +
+ +Note: The `CDB_GET_MODS_REVERSE` flag is ignored by +`cdb_get_modifications_iter()`. It will instead return a "forward" or +"reverse" list of modifications for a `CDB_SUB_ABORT` notification +according to whether the `ITER_WANT_REVERSE` flag was included in the +`flags` parameter of the `cdb_diff_iterate()` call. + +
int cdb_get_modifications_cli( int sock, int subid, int flags, char **str); @@ -1855,10 +1874,12 @@ not possible to call this function from the `iter()` function passed to session, use `maapi_get_user_session()` (see [confd_lib_maapi(3)](confd_lib_maapi.3.md)). -> **Note** -> -> Note: When the ConfD High Availability functionality is used, the user -> session information is not available on secondary nodes. +
+ +Note: When the ConfD High Availability functionality is used, the user +session information is not available on secondary nodes. + +
*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_BADSTATE, CONFD_ERR_NOEXISTS @@ -1874,16 +1895,20 @@ configuration data has been received on that socket, before not possible to call this function from the `iter()` function passed to `cdb_diff_iterate()`. -> **Note** -> -> A CDB client is not expected to access the ConfD transaction store -> directly - this function should only be used for logging or debugging -> purposes. +
-> **Note** -> -> When the ConfD High Availability functionality is used, the -> transaction information is not available on secondary nodes. +A CDB client is not expected to access the ConfD transaction store +directly - this function should only be used for logging or debugging +purposes. + +
+ +
+ +When the ConfD High Availability functionality is used, the transaction +information is not available on secondary nodes. + +
*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_BADSTATE, CONFD_ERR_NOEXISTS @@ -2392,17 +2417,19 @@ the `confd_value_t` value element is given as follows: - As a special case, the "instance integer" can be used to select a list entry by using C_CDBBEGIN instead of C_XMLBEGIN (and no key values). -> **Note** -> -> When we use C_PTR, we need to take special care to free any allocated -> memory. When we use C_NOEXISTS and the value is stored in the array, -> we can just use `confd_free_value()` regardless of the type, since the -> `confd_value_t` has the type information. But with C_PTR, only the -> actual value is stored in the pointed-to variable, just as for -> `cdb_get_buf()`, `cdb_get_binary()`, etc, and we need to free the -> memory specifically allocated for the types listed in the description -> of `cdb_get()` above. See the corresponding `cdb_get_xxx()` functions -> for the details of how to do this. +
+
+When we use C_PTR, we need to take special care to free any allocated
+memory. When we use C_NOEXISTS and the value is stored in the array, we
+can just use `confd_free_value()` regardless of the type, since the
+`confd_value_t` has the type information. But with C_PTR, only the
+actual value is stored in the pointed-to variable, just as for
+`cdb_get_buf()`, `cdb_get_binary()`, etc, and we need to free the memory
+specifically allocated for the types listed in the description of
+`cdb_get()` above. See the corresponding `cdb_get_xxx()` functions for
+the details of how to do this.
+
+
All elements have the same position in the array after the call, in order to simplify extraction of the values - this means that optional @@ -2594,11 +2621,13 @@ sockets, or to alternate the use of one socket via `cdb_end_session()`. The write functions can never be used in a session for configuration data. -> **Note** -> -> In order to trigger subscriptions on operational data, we must obtain -> a subscription lock via the use of `cdb_start_session2()` instead of -> `cdb_start_session()`, see above. +
+
+In order to trigger subscriptions on operational data, we must obtain a
+subscription lock via the use of `cdb_start_session2()` instead of
+`cdb_start_session()`, see above.
+
+
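
A minimal sketch of an operational write that takes the subscription lock so that operational subscribers are triggered; the `/stats/counter` leaf is hypothetical.

```c
#include <stdint.h>

#include <confd_lib.h>
#include <confd_cdb.h>

/* Write a uint32 operational leaf so that CDB operational subscribers
   are notified. 'sock' is an already connected CDB data socket. */
static int write_oper_counter(int sock, uint32_t counter)
{
    confd_value_t v;

    /* The subscription lock is what makes subscribers see this write. */
    if (cdb_start_session2(sock, CDB_OPERATIONAL,
                           CDB_LOCK_REQUEST | CDB_LOCK_WAIT) != CONFD_OK)
        return CONFD_ERR;

    CONFD_SET_UINT32(&v, counter);
    if (cdb_set_elem(sock, &v, "/stats/counter") != CONFD_OK) {
        cdb_end_session(sock);
        return CONFD_ERR;
    }
    return cdb_end_session(sock);
}
```
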
In YANG it is possible to define a list of operational data without any keys. For this type of list, we use a single "pseudo" key which is diff --git a/resources/man/confd_lib_dp.3.md b/resources/man/confd_lib_dp.3.md index 17081be3..5fc1e84b 100644 --- a/resources/man/confd_lib_dp.3.md +++ b/resources/man/confd_lib_dp.3.md @@ -406,13 +406,15 @@ flags are available: > be invoked with the case value given as NULL instead of the default > case. > -> > [!NOTE] -> > A daemon that has the `CONFD_DAEMON_FLAG_NO_DEFAULTS` flag set -> > *must* reply to `get_elem()` and the other callbacks that request -> > leaf values with a value of type C_DEFAULT, rather than the actual -> > default value, when the default value for a leaf is in effect. It -> > *must* also reply to `get_case()` with C_DEFAULT when the default -> > case is in effect. +>
+> +> A daemon that has the `CONFD_DAEMON_FLAG_NO_DEFAULTS` flag set *must* +> reply to `get_elem()` and the other callbacks that request leaf values +> with a value of type C_DEFAULT, rather than the actual default value, +> when the default value for a leaf is in effect. It *must* also reply +> to `get_case()` with C_DEFAULT when the default case is in effect. +> +>
`CONFD_DAEMON_FLAG_PREFER_BULK_GET` > This flag requests that the `get_object()` callback rather than @@ -476,23 +478,27 @@ daemon and ConfD. Returns CONFD_OK when successful or CONFD_ERR on connection error. -> **Note** -> -> All the callbacks that are invoked via these sockets are subject to -> timeouts configured in `confd.conf`, see -> [confd.conf(5)](ncs.conf.5.md). The callbacks invoked via the -> control socket must generate a reply back to ConfD within the time -> configured for /confdConfig/capi/newSessionTimeout, the callbacks -> invoked via a worker socket within the time configured for -> /confdConfig/capi/queryTimeout. If either timeout is exceeded, the -> daemon will be considered dead, and ConfD will disconnect it by -> closing the control and worker sockets. - -> **Note** -> -> If this call fails (i.e. does not return CONFD_OK), the socket -> descriptor must be closed and a new socket created before the call is -> re-attempted. +
+
+All the callbacks that are invoked via these sockets are subject to
+timeouts configured in `confd.conf`, see
+[confd.conf(5)](ncs.conf.5.md). The callbacks invoked via the control
+socket must generate a reply back to ConfD within the time configured
+for /confdConfig/capi/newSessionTimeout, the callbacks invoked via a
+worker socket within the time configured for
+/confdConfig/capi/queryTimeout. If either timeout is exceeded, the
+daemon will be considered dead, and ConfD will disconnect it by closing
+the control and worker sockets.
+
+
+ +
+
+If this call fails (i.e. does not return CONFD_OK), the socket
+descriptor must be closed and a new socket created before the call is
+re-attempted.
+
+
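
A sketch of the typical connect sequence for a data provider daemon, closing the socket if `confd_connect()` fails as the notes above require; the daemon name and address are example values.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>

#include <confd_lib.h>
#include <confd_dp.h>

int main(void)
{
    struct sockaddr_in addr;
    struct confd_daemon_ctx *dctx;
    int ctlsock, workersock;

    confd_init("example-dp", stderr, CONFD_TRACE);
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");
    addr.sin_port = htons(CONFD_PORT);

    if ((dctx = confd_init_daemon("example-dp")) == NULL)
        confd_fatal("failed to init daemon\n");

    /* One control socket ... */
    ctlsock = socket(PF_INET, SOCK_STREAM, 0);
    if (confd_connect(dctx, ctlsock, CONTROL_SOCKET,
                      (struct sockaddr *)&addr, sizeof(addr)) != CONFD_OK) {
        close(ctlsock);     /* per the note: do not reuse the descriptor */
        confd_fatal("control socket connect failed\n");
    }

    /* ... and one worker socket for this small example. */
    workersock = socket(PF_INET, SOCK_STREAM, 0);
    if (confd_connect(dctx, workersock, WORKER_SOCKET,
                      (struct sockaddr *)&addr, sizeof(addr)) != CONFD_OK) {
        close(workersock);
        confd_fatal("worker socket connect failed\n");
    }

    /* ... register callbacks and call confd_register_done() here ... */
    return 0;
}
```
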
*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_PROTOUSAGE @@ -1245,28 +1251,31 @@ non-zero, those callbacks must act as if data with the > latter case the application must at a later stage call > `confd_data_reply_next_key()` or `confd_data_reply_next_key_attrs()`. > -> > [!NOTE] -> > For a list that does not specify a non-default sort order by means -> > of an `ordered-by user` or `tailf:sort-order` statement, ConfD -> > assumes that list entries are ordered strictly by increasing key (or -> > secondary index) values. I.e., CDB's sort order. Thus, for correct -> > operation, we must observe this order when returning list entries in -> > a sequence of `get_next()` calls. -> > -> > A special case is the `union` type key. Entries are ordered by -> > increasing key for their type while types are sorted in the order of -> > appearance in 'enum confd_vtype', see -> > [confd_types(3)](confd_types.3.md). There are exceptions to this -> > rule, namely these five types, which are always sorted at the end: -> > `C_BUF`, `C_DURATION`, `C_INT32`, `C_UINT8`, and `C_UINT16`. Among -> > these, `C_BUF` always comes first, and after that comes -> > `C_DURATION`. Then follows the three integer types, `C_INT32`, -> > `C_UINT8` and `C_UINT16`, which are sorted together in natural -> > number order regardless of type. -> > -> > If CDB's sort order cannot be provided to ConfD for configuration -> > data, /confdConfig/sortTransactions should be set to 'false'. See -> > [confd.conf(5)](ncs.conf.5.md). +>
+> +> For a list that does not specify a non-default sort order by means of +> an `ordered-by user` or `tailf:sort-order` statement, ConfD assumes +> that list entries are ordered strictly by increasing key (or secondary +> index) values. I.e., CDB's sort order. Thus, for correct operation, we +> must observe this order when returning list entries in a sequence of +> `get_next()` calls. +> +> A special case is the `union` type key. Entries are ordered by +> increasing key for their type while types are sorted in the order of +> appearance in 'enum confd_vtype', see +> [confd_types(3)](confd_types.3.md). There are exceptions to this +> rule, namely these five types, which are always sorted at the end: +> `C_BUF`, `C_DURATION`, `C_INT32`, `C_UINT8`, and `C_UINT16`. Among +> these, `C_BUF` always comes first, and after that comes `C_DURATION`. +> Then follows the three integer types, `C_INT32`, `C_UINT8` and +> `C_UINT16`, which are sorted together in natural number order +> regardless of type. +> +> If CDB's sort order cannot be provided to ConfD for configuration +> data, /confdConfig/sortTransactions should be set to 'false'. See +> [confd.conf(5)](ncs.conf.5.md). +> +>
`set_elem()` > This callback writes the value of a leaf. Note that an optional leaf @@ -1278,9 +1287,12 @@ non-zero, those callbacks must act as if data with the > The callback must return CONFD_OK on success, CONFD_ERR on error or > CONFD_DELAYED_RESPONSE. > -> > [!NOTE] -> > Type `empty` leafs part of a `union` are set using this function. -> > Type `empty` leafs outside of `union` use `create()` and `exists()`. +>
+> +> Type `empty` leafs part of a `union` are set using this function. Type +> `empty` leafs outside of `union` use `create()` and `exists()`. +> +>
`create()` > This callback creates a new list entry, a `presence` container, a leaf @@ -1433,13 +1445,16 @@ non-zero, those callbacks must act as if data with the > `CONFD_FIND_NEXT`, and the (complete) set of keys from the previous > reply. > -> > [!NOTE] -> > In the case of list traversal by means of a secondary index, the -> > secondary index values must be unique for entry-by-entry traversal -> > with `find_next()`/`find_next_object()` to be possible. Thus we can -> > not pass `-1` for the `next` parameter to -> > `confd_data_reply_next_key()` or `confd_data_reply_next_key_attrs()` -> > in this case if the secondary index values are not unique. +>
+> +> In the case of list traversal by means of a secondary index, the +> secondary index values must be unique for entry-by-entry traversal +> with `find_next()`/`find_next_object()` to be possible. Thus we can +> not pass `-1` for the `next` parameter to +> `confd_data_reply_next_key()` or `confd_data_reply_next_key_attrs()` +> in this case if the secondary index values are not unique. +> +>
> > To signal that no entry matching the request exists, i.e. we have > reached the end of the list while evaluating the request, we reply @@ -1452,28 +1467,31 @@ non-zero, those callbacks must act as if data with the > element is requested, and then this value is kept as the list is being > traversed. If a new traversal is started, a new unique value is set. > -> > [!NOTE] -> > For a list that does not specify a non-default sort order by means -> > of an `ordered-by user` or `tailf:sort-order` statement, ConfD -> > assumes that list entries are ordered strictly by increasing key (or -> > secondary index) values. I.e., CDB's sort order. Thus, for correct -> > operation, we must observe this order when returning list entries in -> > a sequence of `get_next()` calls. -> > -> > A special case is the union type key. Entries are ordered by -> > increasing key for their type while types are sorted in the order of -> > appearance in 'enum confd_vtype', see -> > [confd_types(3)](confd_types.3.md). There are exceptions to this -> > rule, namely these five types, which are always sorted at the end: -> > `C_BUF`, `C_DURATION`, `C_INT32`, `C_UINT8`, and `C_UINT16`. Among -> > these, `C_BUF` always comes first, and after that comes -> > `C_DURATION`. Then follows the three integer types, `C_INT32`, -> > `C_UINT8` and `C_UINT16`, which are sorted together in natural -> > number order regardless of type. -> > -> > If CDB's sort order cannot be provided to ConfD for configuration -> > data, /confdConfig/sortTransactions should be set to 'false'. See -> > [confd.conf(5)](ncs.conf.5.md). +>
+> +> For a list that does not specify a non-default sort order by means of +> an `ordered-by user` or `tailf:sort-order` statement, ConfD assumes +> that list entries are ordered strictly by increasing key (or secondary +> index) values. I.e., CDB's sort order. Thus, for correct operation, we +> must observe this order when returning list entries in a sequence of +> `get_next()` calls. +> +> A special case is the union type key. Entries are ordered by +> increasing key for their type while types are sorted in the order of +> appearance in 'enum confd_vtype', see +> [confd_types(3)](confd_types.3.md). There are exceptions to this +> rule, namely these five types, which are always sorted at the end: +> `C_BUF`, `C_DURATION`, `C_INT32`, `C_UINT8`, and `C_UINT16`. Among +> these, `C_BUF` always comes first, and after that comes `C_DURATION`. +> Then follows the three integer types, `C_INT32`, `C_UINT8` and +> `C_UINT16`, which are sorted together in natural number order +> regardless of type. +> +> If CDB's sort order cannot be provided to ConfD for configuration +> data, /confdConfig/sortTransactions should be set to 'false'. See +> [confd.conf(5)](ncs.conf.5.md). +> +>
> > If we have registered `find_next()` (or `find_next_object()`), it is > not strictly necessary to also register `get_next()` (or @@ -1765,12 +1783,15 @@ non-zero, those callbacks must act as if data with the > should reply by calling `confd_data_reply_not_found()`, otherwise it > should call `confd_data_reply_attrs()`, even if no attributes are set. > -> > [!NOTE] -> > It is very important to observe this distinction, i.e. to use -> > `confd_data_reply_not_found()` when the node doesn't exist, since -> > ConfD may use `get_attrs()` as an existence check when attributes -> > are enabled. (This avoids doing one callback request for existence -> > check and another to collect the attributes.) +>
+> +> It is very important to observe this distinction, i.e. to use +> `confd_data_reply_not_found()` when the node doesn't exist, since +> ConfD may use `get_attrs()` as an existence check when attributes are +> enabled. (This avoids doing one callback request for existence check +> and another to collect the attributes.) +> +>
> > Must return CONFD_OK on success, CONFD_ERR on error, or > CONFD_DELAYED_RESPONSE. @@ -1914,22 +1935,26 @@ would just reply with `confd_data_reply_not_found()` for all requests for specific data, and `confd_data_reply_next_key()` with NULL for the key values for all `get_next()` etc requests. -> **Note** -> -> For a given callpoint name, there can only be either one non-range -> registration or a number of range registrations that all pertain to -> the same list. If a range registration is done after a non-range -> registration or vice versa, or if a range registration is done with a -> different list path than earlier range registrations, the latest -> registration completely replaces the earlier one(s). If we want to -> register for the same ranges in different lists, we must thus have a -> unique callpoint for each list. - -> **Note** -> -> Range registrations can not be used for lists that have the -> `tailf:secondary-index` extension, since there is no way for ConfD to -> traverse the registrations in secondary-index order. +
+
+For a given callpoint name, there can only be either one non-range
+registration or a number of range registrations that all pertain to the
+same list. If a range registration is done after a non-range
+registration or vice versa, or if a range registration is done with a
+different list path than earlier range registrations, the latest
+registration completely replaces the earlier one(s). If we want to
+register for the same ranges in different lists, we must thus have a
+unique callpoint for each list.
+
+
+ +
+
+Range registrations can not be used for lists that have the
+`tailf:secondary-index` extension, since there is no way for ConfD to
+traverse the registrations in secondary-index order.
+
+
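
A sketch of a range registration under the constraints above, assuming a hypothetical string-keyed list `/servers/server`, a callpoint split between daemons, and a `confd_data_cbs` struct filled in elsewhere.

```c
#include <confd_lib.h>
#include <confd_dp.h>

/* Callback struct for the lower half of the key space; assumed to be
   initialized elsewhere with callpoint "serverscp" and the callbacks. */
extern struct confd_data_cbs servers_a_to_m;

static int register_ranges(struct confd_daemon_ctx *dctx)
{
    confd_value_t lower, upper;

    CONFD_SET_STR(&lower, "a");
    CONFD_SET_STR(&upper, "m");

    /* All range registrations for this callpoint must refer to the same
       list path, and the list must not use tailf:secondary-index. */
    return confd_register_range_data_cb(dctx, &servers_a_to_m,
                                        &lower, &upper, 1,
                                        "/servers/server");
}
```
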
int confd_register_usess_cb( struct confd_daemon_ctx *dx, const struct confd_usess_cbs *ucb); @@ -1959,23 +1984,27 @@ a worker thread would often mean that we allocated a thread that was never used. The `u_opaque` element in the `struct confd_user_info` can be used to manage such allocations. -> **Note** -> -> These callbacks will only be invoked if the daemon has also registered -> other callbacks. Furthermore, as an optimization, ConfD will delay the -> invocation of the `start()` callback until some other callback is -> invoked. This means that if no other callbacks for the daemon are -> invoked for the duration of a user session, neither `start()` nor -> `stop()` will be invoked for that user session. If we want timely -> notification of start and stop for all user sessions, we can subscribe -> to `CONFD_NOTIF_AUDIT` events, see -> [confd_lib_events(3)](confd_lib_events.3.md). - -> **Note** -> -> When we call `confd_register_done()` (see below), the `start()` -> callback (if registered) will be invoked for each user session that -> already exists. +
+ +These callbacks will only be invoked if the daemon has also registered +other callbacks. Furthermore, as an optimization, ConfD will delay the +invocation of the `start()` callback until some other callback is +invoked. This means that if no other callbacks for the daemon are +invoked for the duration of a user session, neither `start()` nor +`stop()` will be invoked for that user session. If we want timely +notification of start and stop for all user sessions, we can subscribe +to `CONFD_NOTIF_AUDIT` events, see +[confd_lib_events(3)](confd_lib_events.3.md). + +
+ +
+ +When we call `confd_register_done()` (see below), the `start()` callback +(if registered) will be invoked for each user session that already +exists. + +
int confd_register_done( struct confd_daemon_ctx *dx); @@ -2369,13 +2398,15 @@ that we want the next request for this list traversal to use the `find_next()` (or `find_next_object()`) callback instead of `get_next()` (or `get_next_object()`). -> **Note** -> -> In the case of list traversal by means of a secondary index, the -> secondary index values must be unique for entry-by-entry traversal -> with `find_next()`/`find_next_object()` to be possible. Thus we can -> not pass `-1` for the `next` parameter in this case if the secondary -> index values are not unique. +
+
+In the case of list traversal by means of a secondary index, the
+secondary index values must be unique for entry-by-entry traversal with
+`find_next()`/`find_next_object()` to be possible. Thus we can not pass
+`-1` for the `next` parameter in this case if the secondary index values
+are not unique.
+
+
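
To make the `next` handling concrete, a sketch of a `get_next()` callback that walks a small, pre-sorted in-memory array for a hypothetical list `/servers/server` and uses the array position as the `next` cookie; end-of-list is signalled with a NULL key and `-1`.

```c
#include <confd_lib.h>
#include <confd_dp.h>

/* Already sorted in CDB (increasing) key order, as required for lists
   without ordered-by user / tailf:sort-order. */
static char *server_names[] = { "alpha", "beta", "gamma" };
#define NUM_SERVERS (sizeof(server_names)/sizeof(server_names[0]))

static int get_next(struct confd_trans_ctx *tctx,
                    confd_hkeypath_t *keypath, long next)
{
    confd_value_t key;
    long pos = (next == -1) ? 0 : next;   /* -1 means "first entry" */

    if (pos >= (long)NUM_SERVERS) {
        /* End of list reached. */
        confd_data_reply_next_key(tctx, NULL, -1, -1);
        return CONFD_OK;
    }
    CONFD_SET_STR(&key, server_names[pos]);
    /* Hand back the key and the cookie for the following entry. */
    confd_data_reply_next_key(tctx, &key, 1, pos + 1);
    return CONFD_OK;
}
```
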
*Errors*: CONFD_ERR_PROTOUSAGE, CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_BADTYPE @@ -2557,23 +2588,27 @@ latter is preferable, since we can then combine the final list entries with the end-of-list indication in the reply to a single callback invocation. -> **Note** -> -> When `next` values other than `-1` are used, these must remain valid -> even after the end of the list has been reached, since ConfD may still -> need to issue a new callback request based on an "intermediate" `next` -> value as described above. They can be discarded (e.g. allocated memory -> released) when a new `get_next_object()` or `find_next_object()` -> callback request for the same list in the same transaction has been -> received, or at the end of the transaction. - -> **Note** -> -> In the case of list traversal by means of a secondary index, the -> secondary index values must be unique for entry-by-entry traversal -> with `find_next_object()`/`find_next()` to be possible. Thus we can -> not use `-1` for the `next` element in this case if the secondary -> index values are not unique. +
+ +When `next` values other than `-1` are used, these must remain valid +even after the end of the list has been reached, since ConfD may still +need to issue a new callback request based on an "intermediate" `next` +value as described above. They can be discarded (e.g. allocated memory +released) when a new `get_next_object()` or `find_next_object()` +callback request for the same list in the same transaction has been +received, or at the end of the transaction. + +
+ +
+ +In the case of list traversal by means of a secondary index, the +secondary index values must be unique for entry-by-entry traversal with +`find_next_object()`/`find_next()` to be possible. Thus we can not use +`-1` for the `next` element in this case if the secondary index values +are not unique. + +
*Errors*: CONFD_ERR_PROTOUSAGE, CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_BADTYPE @@ -2628,23 +2663,27 @@ The latter is preferable, since we can then combine the final list entries with the end-of-list indication in the reply to a single callback invocation. -> **Note** -> -> When `next` values other than `-1` are used, these must remain valid -> even after the end of the list has been reached, since ConfD may still -> need to issue a new callback request based on an "intermediate" `next` -> value as described above. They can be discarded (e.g. allocated memory -> released) when a new `get_next_object()` or `find_next_object()` -> callback request for the same list in the same transaction has been -> received, or at the end of the transaction. - -> **Note** -> -> In the case of list traversal by means of a secondary index, the -> secondary index values must be unique for entry-by-entry traversal -> with `find_next_object()`/`find_next()` to be possible. Thus we can -> not use `-1` for the `next` element in this case if the secondary -> index values are not unique. +
+ +When `next` values other than `-1` are used, these must remain valid +even after the end of the list has been reached, since ConfD may still +need to issue a new callback request based on an "intermediate" `next` +value as described above. They can be discarded (e.g. allocated memory +released) when a new `get_next_object()` or `find_next_object()` +callback request for the same list in the same transaction has been +received, or at the end of the transaction. + +
+ +
+ +In the case of list traversal by means of a secondary index, the +secondary index values must be unique for entry-by-entry traversal with +`find_next_object()`/`find_next()` to be possible. Thus we can not use +`-1` for the `next` element in this case if the secondary index values +are not unique. + +
*Errors*: CONFD_ERR_PROTOUSAGE, CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_BADTYPE @@ -2851,13 +2890,15 @@ This function will copy those keys from ConfD (which reads confd.conf) into memory in the library. The parameter `dtx` is a daemon context which is connected through a call to `confd_connect()`. -> **Note** -> -> The function must be called before `confd_register_done()` is called. -> If this is impractical, or if the application doesn't otherwise use a -> daemon context, the equivalent function `maapi_install_crypto_keys()` -> may be more convenient to use, see -> [confd_lib_maapi(3)](confd_lib_maapi.3.md). +
+
+The function must be called before `confd_register_done()` is called. If
+this is impractical, or if the application doesn't otherwise use a
+daemon context, the equivalent function `maapi_install_crypto_keys()`
+may be more convenient to use, see
+[confd_lib_maapi(3)](confd_lib_maapi.3.md).
+
+
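
A minimal sketch of installing the keys from a connected daemon context before `confd_register_done()`; error handling and the surrounding daemon setup are omitted.

```c
#include <confd_lib.h>
#include <confd_dp.h>

/* dctx is a daemon context already connected with confd_connect(). */
static void load_crypto_keys(struct confd_daemon_ctx *dctx)
{
    /* Must happen before confd_register_done() for this daemon. */
    if (confd_install_crypto_keys(dctx) != CONFD_OK)
        confd_fatal("confd_install_crypto_keys() failed\n");
    /* After this, values of e.g. tailf:aes-cfb-128-encrypted-string
       leaves can be decrypted in the application with confd_decrypt(). */
}
```
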
## Ncs Service Callbacks @@ -2935,10 +2976,12 @@ All the callbacks receive a property list via the `proplist` and and `num_props` == 0), but it can be used to store and later modify persistent data outside the service model that might be needed. -> **Note** -> -> We must call the `confd_register_done()` function when we are done -> with all registrations for a daemon, see above. +
+
+We must call the `confd_register_done()` function when we are done with
+all registrations for a daemon, see above.
+
+
int ncs_service_reply_proplist( struct confd_trans_ctx *tctx, const struct ncs_name_value *proplist, int num_props); @@ -3048,10 +3091,12 @@ struct confd_valpoint_cb { -> **Note** -> -> We must call the `confd_register_done()` function when we are done -> with all registrations for a daemon, see above. +
+
+We must call the `confd_register_done()` function when we are done with
+all registrations for a daemon, see above.
+
+
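
A sketch of registering a validation point and then completing registration; the validation point name `vp-check` and the external `validate_cb()` implementation are assumptions, and the exact `confd_valpoint_cb` field usage should be checked against `confd_dp.h`.

```c
#include <string.h>

#include <confd_lib.h>
#include <confd_dp.h>

/* Validation callback implemented elsewhere. */
extern int validate_cb(struct confd_trans_ctx *tctx,
                       confd_hkeypath_t *keypath, confd_value_t *newval);

static int register_validation(struct confd_daemon_ctx *dctx)
{
    struct confd_valpoint_cb vcb;

    memset(&vcb, 0, sizeof(vcb));
    strcpy(vcb.valpoint, "vp-check");
    vcb.validate = validate_cb;

    if (confd_register_valpoint_cb(dctx, &vcb) != CONFD_OK)
        return CONFD_ERR;

    /* Tell ConfD that this daemon has registered everything it will. */
    return confd_register_done(dctx);
}
```
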
See the user guide chapter "Semantic validation" for code examples. The `validate()` callback can return CONFD_OK if all is well, or CONFD_ERROR @@ -3192,10 +3237,12 @@ for sending data to ConfD, there is no need for the application to poll the socket. Note that the control socket must be connected before registration even if the callbacks are not registered. -> **Note** -> -> We must call the `confd_register_done()` function when we are done -> with all registrations for a daemon, see above. +
+ +We must call the `confd_register_done()` function when we are done with +all registrations for a daemon, see above. + +
The `get_log_times()` callback is called by ConfD to find out a) the creation time of the current log and b) the event time of the last @@ -3280,10 +3327,12 @@ the notification as described for the Tagged Value Array format in the [XML STRUCTURES](confd_types.3.md#xml_structures) section of the [confd_types(3)](confd_types.3.md) manual page. -> **Note** -> -> The order of the tags in the array must be the same order as in the -> YANG model. +
+ +The order of the tags in the array must be the same order as in the YANG +model. + +
For example, with this definition at the top level of the YANG module "test": @@ -3346,10 +3395,12 @@ of the notification, in the same form as for the [confd_lib_cdb(3)](confd_lib_cdb.3.md) functions. Giving "/" for the path is equivalent to calling `confd_notification_send()`. -> **Note** -> -> The path must be fully instantiated, i.e. all list nodes in the path -> must have all their keys specified. +
+ +The path must be fully instantiated, i.e. all list nodes in the path +must have all their keys specified. + +
For example, with this definition at the top level of the YANG module "test": @@ -3408,14 +3459,16 @@ could be sent with the following code: -> **Note** -> -> While it is possible to use separate threads to send live and replay -> notifications for a given stream, or to send different streams on a -> given worker socket, this is not recommended. This is because it -> involves rather complex synchronization problems that can only be -> fully solved by the application, in particular in the case where a -> replay switches over to the live feed. +
+ +While it is possible to use separate threads to send live and replay +notifications for a given stream, or to send different streams on a +given worker socket, this is not recommended. This is because it +involves rather complex synchronization problems that can only be fully +solved by the application, in particular in the case where a replay +switches over to the live feed. + +
int confd_notification_replay_complete( struct confd_notification_ctx *nctx); @@ -3511,10 +3564,12 @@ context. If `notify_name` is NULL or the empty string (""), the notification is sent to all management targets. If `ctx_name` is NULL or the empty string (""), the default context ("") is used. -> **Note** -> -> We must call the `confd_register_done()` function when we are done -> with all registrations for a daemon, see above. +
+ +We must call the `confd_register_done()` function when we are done with +all registrations for a daemon, see above. + +
int confd_notification_send_snmp( struct confd_notification_ctx *nctx, const char *notification, struct confd_snmp_varbind *varbinds, @@ -3562,10 +3617,12 @@ callback, one for each target. The `ref` argument (passed from the `confd_notification_send_snmp_inform()` call) allows for tracking the result of multiple notifications with delivery overlap. -> **Note** -> -> We must call the `confd_register_done()` function when we are done -> with all registrations for a daemon, see above. +
+ +We must call the `confd_register_done()` function when we are done with +all registrations for a daemon, see above. + +
int confd_notification_send_snmp_inform( struct confd_notification_ctx *nctx, const char *notification, struct confd_snmp_varbind *varbinds, @@ -3632,10 +3689,12 @@ The `sub_id` element is the subscription id for the notifications. The the section "Receiving and Forwarding Traps" in the chapter "The SNMP gateway" in the Users Guide. -> **Note** -> -> We must call the `confd_register_done()` function when we are done -> with all registrations for a daemon, see above. +
+ +We must call the `confd_register_done()` function when we are done with +all registrations for a daemon, see above. + +
int confd_notification_flush( struct confd_notification_ctx *nctx); @@ -3660,18 +3719,22 @@ specified by the subscription callback and sends it via a socket to ConfD. Push notifications that are received by ConfD are then published to the NETCONF subscribers. -> [!WARNING] -> *Experimental*. The PUSH ON-CHANGE CALLBACKS are not subject to -> libconfd protocol version policy. Non-backwards compatible changes or -> removal may occur in any future release. +
+ +*Experimental*. The PUSH ON-CHANGE CALLBACKS are not subject to libconfd +protocol version policy. Non-backwards compatible changes or removal may +occur in any future release. -> **Note** -> -> ConfD implements a YANG-Push server and the push on-change callbacks -> provide a complementary mechanism for ConfD to publish updates from -> the data managed by data providers. Thus, it is recommended to be -> familiar with YANG-Push (RFC 8641) and YANG Patch (RFC 8072) -> standards. +
+ +
+ +ConfD implements a YANG-Push server and the push on-change callbacks +provide a complementary mechanism for ConfD to publish updates from the +data managed by data providers. Thus, it is recommended to be familiar +with YANG-Push (RFC 8641) and YANG Patch (RFC 8072) standards. + +
int confd_register_push_on_change( struct confd_daemon_ctx *dx, const struct confd_push_on_change_cbs *pcbs); @@ -3706,10 +3769,12 @@ for sending data to ConfD, there is no need for the application to poll the socket. Note that the control socket must be connected before registration. -> **Note** -> -> We must call the `confd_register_done()` function when we are done -> with all registrations for a daemon, see above. +
+ +We must call the `confd_register_done()` function when we are done with +all registrations for a daemon, see above. + +
The `subscribe_on_change()` callback is called by ConfD to initiate a subscription on specified data with specified trigger options passed by @@ -3761,13 +3826,16 @@ The `usid` is the user id corresponding to the user of the NETCONF session. The user id can be used to optionally identify and obtain the user session, which can be used to authorize the push notifications. -> [!WARNING] -> ConfD will always check access rights on the data that is pushed from -> the applications, unless the configuration parameter -> `enableExternalAccessCheck` is set to *true*. If -> `enableExternalAccessCheck` is true and the application sets the -> `CONFD_PATCH_FLAG_AAA_CHECKED` flag, then ConfD will not perform -> access right checks on the received data. +
+ +ConfD will always check access rights on the data that is pushed from +the applications, unless the configuration parameter +`enableExternalAccessCheck` is set to *true*. If +`enableExternalAccessCheck` is true and the application sets the +`CONFD_PATCH_FLAG_AAA_CHECKED` flag, then ConfD will not perform access +right checks on the received data. + +
The optional `xpath_filter` element is the string representation of the XPath filter provided for the subscription to identify a portion of data @@ -3894,11 +3962,14 @@ corresponding to the below macros and their conditions. -> [!WARNING] -> Currently ConfD can not apply an XPath or Subtree filter on the data -> provided in push notifications. If the `CONFD_PATCH_FLAG_FILTER` flag -> is set, ConfD can only filter out the edits with operations that are -> specified in excluded changes. +
+ +Currently ConfD can not apply an XPath or Subtree filter on the data +provided in push notifications. If the `CONFD_PATCH_FLAG_FILTER` flag is +set, ConfD can only filter out the edits with operations that are +specified in excluded changes. + +
The `struct confd_data_edit` structure is defined as: @@ -3987,10 +4058,12 @@ according to the specification of the Tagged Value Array format in the [XML STRUCTURES](confd_types.3.md#xml_structures) section of the [confd_types(3)](confd_types.3.md) manual page. -> **Note** -> -> The order of the tags in the array must be the same order as in the -> YANG model. +
+ +The order of the tags in the array must be the same order as in the YANG +model. + +
The conditional `ndata` must be set to an integer value if `data` is set, according to the number of `struct confd_tag_value_t` instances @@ -4191,10 +4264,12 @@ If the `tailf:opaque` substatement has been used with the made available to the callbacks via the `actionpoint_opaque` element in the `confd_action_ctx` structure. -> **Note** -> -> We must call the `confd_register_done()` function when we are done -> with all registrations for a daemon, see above. +
+ +We must call the `confd_register_done()` function when we are done with +all registrations for a daemon, see above. + +
The `action()` callback receives all the parameters pertaining to the action: The `name` argument is a pointer to the action name as defined @@ -4271,12 +4346,14 @@ callbacks for a range of key values. The `lower`, `upper`, `numkeys`, `fmt`, and remaining parameters are the same as for `confd_register_range_data_cb()`, see above. -> **Note** -> -> This function can not be used for registration of the `command()` or -> `completion()` callbacks - only actions specified in the data model -> are invoked via a keypath that can be used for selection of the -> corresponding callbacks. +
+ +This function can not be used for registration of the `command()` or +`completion()` callbacks - only actions specified in the data model are +invoked via a keypath that can be used for selection of the +corresponding callbacks. + +
void confd_action_set_fd( struct confd_user_info *uinfo, int sock); @@ -4303,9 +4380,11 @@ it must invoke this function in response to the `action()` callback. The `values` argument points to an array of length `nvalues`, populated with the output parameters in the same way as the `params` array above. -> **Note** -> -> This function must only be called for an `action()` callback. +
+ +This function must only be called for an `action()` callback. + +
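A minimal, illustrative `action()` callback that answers with a single output leaf; the generated header `act.h` and the `act_result` tag are hypothetical names produced by `confdc` from the data model.

``` c
#include <confd_lib.h>
#include <confd_dp.h>
#include "act.h"  /* hypothetical confdc-generated header defining act_result */

static int do_my_action(struct confd_user_info *uinfo,
                        struct xml_tag *name,
                        confd_hkeypath_t *kp,
                        confd_tag_value_t *params, int nparams)
{
    confd_tag_value_t reply[1];

    /* ... do the actual work based on params ... */
    CONFD_SET_TAG_STR(&reply[0], act_result, "ok");
    return confd_action_reply_values(uinfo, reply, 1);
}
```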
int confd_action_reply_command( struct confd_user_info *uinfo, char **values, int nvalues); @@ -4315,9 +4394,11 @@ function in response to the `command()` callback. The `values` argument points to an array of length `nvalues`, populated with pointers to NUL-terminated strings. -> **Note** -> -> This function must only be called for a `command()` callback. +
+ +This function must only be called for a `command()` callback. + +
int confd_action_reply_rewrite( struct confd_user_info *uinfo, char **values, int nvalues, char **unhides, @@ -4331,9 +4412,11 @@ to NUL-terminated strings representing the tokens of the new path. The with pointers to NUL-terminated strings representing hide groups to temporarily unhide during evaluation of the show command. -> **Note** -> -> This function must only be called for a `command()` callback. +
+ +This function must only be called for a `command()` callback. + +
int confd_action_reply_rewrite2( struct confd_user_info *uinfo, char **values, int nvalues, char **unhides, @@ -4350,9 +4433,11 @@ argument points to an array of length `nselects`, populated with pointers to confd_rewrite_select structs representing additional select targets. -> **Note** -> -> This function must only be called for a `command()` callback. +
+ +This function must only be called for a `command()` callback. + +
int confd_action_reply_completion( struct confd_user_info *uinfo, struct confd_completion_value *values, @@ -4396,9 +4481,11 @@ set to CONFD_COMPLETION_DEFAULT. CONFD_COMPLETION_DEFAULT cannot be combined with the other completion types, implying the `values` array always must have length `1` which is indicated by `nvalues` setting. -> **Note** -> -> This function must only be called for a `completion()` callback. +
+ +This function must only be called for a `completion()` callback. + +
int confd_action_reply_range_enum( struct confd_user_info *uinfo, char **values, int keysize, int nkeys); @@ -4413,9 +4500,11 @@ the array gives entry1-key1, entry1-key2, ..., entry2-key1, entry2-key2, ... and so on. See the `cli/range_create` example in the bundled examples collection for details. -> **Note** -> -> This function must only be called for a `completion()` callback. +
+ +This function must only be called for a `completion()` callback. + +
void confd_action_seterr( struct confd_user_info *uinfo, const char *fmt); @@ -4483,18 +4572,22 @@ is both enabled via /confdConfig/aaa/authenticationCallback/enabled in `confd.conf` (see [confd.conf(5)](ncs.conf.5.md)) and registered as described here. -> **Note** -> -> If the callback is enabled in `confd.conf` but not registered, or -> invocation keeps failing for some reason, *all* authentication -> attempts will fail. +
+ +If the callback is enabled in `confd.conf` but not registered, or +invocation keeps failing for some reason, *all* authentication attempts +will fail. + +
-> **Note** -> -> This callback can not be used to actually *perform* the -> authentication. If we want to implement the authentication outside of -> ConfD, we need to use PAM or "External" authentication, see the AAA -> chapter in the Admin Guide. +
+ +This callback can not be used to actually *perform* the authentication. +If we want to implement the authentication outside of ConfD, we need to +use PAM or "External" authentication, see the AAA chapter in the Admin +Guide. + +
int confd_register_auth_cb( struct confd_daemon_ctx *dx, const struct confd_auth_cb *acb); @@ -4604,11 +4697,13 @@ The callbacks will only be invoked if they are both enabled via /confdConfig/aaa/authorization/callback/enabled in `confd.conf` (see [confd.conf(5)](ncs.conf.5.md)) and registered as described here. -> **Note** -> -> If the callbacks are enabled in `confd.conf` but no registration has -> been done, or if invocation keeps failing for some reason, *all* -> access checks will be rejected. +
+ +If the callbacks are enabled in `confd.conf` but no registration has +been done, or if invocation keeps failing for some reason, *all* access +checks will be rejected. + +
int confd_register_authorization_cb( struct confd_daemon_ctx *dx, const struct confd_authorization_cbs *acb); @@ -4689,14 +4784,17 @@ struct confd_authorization_ctx { > `CONFD_ACCESS_OP_EXECUTE` > > Execute access. This is used when a command is about to be executed. > -> > [!NOTE] -> > This callback may be invoked with `actx->uinfo == NULL`, meaning -> > that no user session has been established for the user yet. This -> > will occur e.g. when the CLI checks whether a user attempting to log -> > in is allowed to (implicitly) execute the command "request system -> > logout user" (J-CLI) or "logout" (C/I-CLI) when the maximum number -> > of sessions has already been reached (if allowed, the CLI will ask -> > whether the user wants to terminate one of the existing sessions). +>
+> +> This callback may be invoked with `actx->uinfo == NULL`, meaning that +> no user session has been established for the user yet. This will occur +> e.g. when the CLI checks whether a user attempting to log in is +> allowed to (implicitly) execute the command "request system logout +> user" (J-CLI) or "logout" (C/I-CLI) when the maximum number of +> sessions has already been reached (if allowed, the CLI will ask +> whether the user wants to terminate one of the existing sessions). +> +>
`chk_data_access()` > This callback is invoked for data authorization, i.e. it corresponds diff --git a/resources/man/confd_lib_events.3.md b/resources/man/confd_lib_events.3.md index e766087b..5826736f 100644 --- a/resources/man/confd_lib_events.3.md +++ b/resources/man/confd_lib_events.3.md @@ -47,11 +47,13 @@ examples collection illustrates subscription and processing for all these events, and can also be used standalone in a development environment to monitor NSO events. -> **Note** -> -> Any event may allocate memory dynamically inside the -> `struct confd_notification`, thus we must always call -> `confd_free_notification()` after receiving and processing an event. +
+ +Any event may allocate memory dynamically inside the +`struct confd_notification`, thus we must always call +`confd_free_notification()` after receiving and processing an event. + +
## Events @@ -380,11 +382,12 @@ some specific `confd_errno` values: > The user session id given by `usid` does not identify an existing user > session. -> **Note** -> -> If these calls fail (i.e. do not return CONFD_OK), the socket -> descriptor must be closed and a new socket created before the call is -> re-attempted. +
+ +If these calls fail (i.e. do not return CONFD_OK), the socket descriptor +must be closed and a new socket created before the call is re-attempted. + +
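A bare-bones subscriber loop that follows both notes above (free every notification after processing it, and close the socket and start over if the connect call fails); the notification mask and the per-event handling are placeholders.

``` c
#include <sys/socket.h>
#include <unistd.h>
#include <confd_lib.h>
#include <confd_events.h>

int run_event_loop(const struct sockaddr *srv, int srv_sz)
{
    struct confd_notification n;
    int sock = socket(PF_INET, SOCK_STREAM, 0);

    if (sock < 0)
        return CONFD_ERR;
    if (confd_notifications_connect(sock, srv, srv_sz,
                                    CONFD_NOTIF_AUDIT |
                                    CONFD_NOTIF_COMMIT_SIMPLE) != CONFD_OK) {
        close(sock);              /* must close and retry on a new socket */
        return CONFD_ERR;
    }
    while (confd_read_notification(sock, &n) == CONFD_OK) {
        /* ... dispatch on n.type and the matching union member ... */
        confd_free_notification(&n);   /* always free, see the note above */
    }
    close(sock);
    return CONFD_OK;
}
```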
int confd_read_notification( int sock, struct confd_notification *n); diff --git a/resources/man/confd_lib_ha.3.md b/resources/man/confd_lib_ha.3.md index 1d929c52..56f8f303 100644 --- a/resources/man/confd_lib_ha.3.md +++ b/resources/man/confd_lib_ha.3.md @@ -51,11 +51,13 @@ cluster. There can only be one HA socket towards NSO, a new call to `confd_ha_connect()` makes NSO close the previous connection and reset the token to the new value. Returns CONFD_OK or CONFD_ERR. -> **Note** -> -> If this call fails (i.e. does not return CONFD_OK), the socket -> descriptor must be closed and a new socket created before the call is -> re-attempted. +
+ +If this call fails (i.e. does not return CONFD_OK), the socket +descriptor must be closed and a new socket created before the call is +re-attempted. + +
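For illustration, a connect-and-promote sequence could look like the sketch below; the shared token and node name are made-up values.

``` c
#include <sys/socket.h>
#include <unistd.h>
#include <confd_lib.h>
#include <confd_ha.h>

int become_primary(const struct sockaddr *srv, int srv_sz)
{
    confd_value_t nodeid;
    int sock = socket(PF_INET, SOCK_STREAM, 0);

    if (sock < 0)
        return CONFD_ERR;
    if (confd_ha_connect(sock, srv, srv_sz, "shared-token") != CONFD_OK) {
        close(sock);               /* close and retry with a new socket */
        return CONFD_ERR;
    }
    CONFD_SET_STR(&nodeid, "node-1");
    return confd_ha_beprimary(sock, &nodeid);
}
```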
int confd_ha_beprimary( int sock, confd_value_t *mynodeid); diff --git a/resources/man/confd_lib_lib.3.md b/resources/man/confd_lib_lib.3.md index 1b051c3b..4e763b90 100644 --- a/resources/man/confd_lib_lib.3.md +++ b/resources/man/confd_lib_lib.3.md @@ -19,12 +19,16 @@ to NSO int confd_load_schemas( const struct sockaddr* srv, int srv_sz); + int confd_load_schemas_mmap( + const struct sockaddr *srv, int srv_sz, void *shm_addr, size_t shm_size, + const char *file_path, int shm_flags); + int confd_load_schemas_list( const struct sockaddr* srv, int srv_sz, int flags, const uint32_t *nshash, const int *nsflags, int num_ns); int confd_mmap_schemas_setup( - void *addr, size_t size, const char *filename, int flags); + void *addr, size_t size, const char *filename, int flags, const struct confd_schema_stats *stats); int confd_mmap_schemas( const char *filename); @@ -68,9 +72,21 @@ to NSO char *confd_hash2str( uint32_t hash); + size_t confd_hash2str_size( + void); + + void confd_hash2str_iterate( + void(cb)(uint32_t, void *opaque); + uint32_t confd_str2hash( const char *str); + size_t confd_str2hash_size( + void); + + void confd_str2hash_iterate( + void(cb)(uint32_t, void *opaque); + struct confd_cs_node *confd_find_cs_root( uint32_t ns); @@ -83,6 +99,19 @@ to NSO struct confd_cs_node *confd_cs_node_cd( const struct confd_cs_node *start, const char *fmt, ...); + int confd_num_mns_maps( + void); + + void confd_mns_maps_iterate( + void *opaque); + + int confd_mns_map_size( + const mount_id_t *mount_id); + + int confd_mns_map_iterate( + const mount_id_t *mount_id, void(cb)(uint32_t nshash, const char *ns, + const char *prefix, const char *xmlns, const char *modname, void *opaque); + enum confd_vtype confd_get_base_type( struct confd_cs_node *node); @@ -95,6 +124,13 @@ to NSO struct confd_type *confd_find_ns_type( uint32_t nshash, const char *name); + unsigned int confd_ns_type_num( + uint32_t nshash); + + void confd_ns_type_iterate( + uint32_t nshash, void(cb)(uint32_t nshash, const char *name, struct confd_type *type, + void *opaque, void *opaque); + struct confd_type *confd_get_leaf_list_type( struct confd_cs_node *node); @@ -274,8 +310,23 @@ over how the socket communicating with NSO is created. We recommend calling `maapi_load_schemas_list()` directly (see [confd_lib_maapi(3)](confd_lib_maapi.3.md)). + int confd_load_schemas_mmap( + const struct sockaddr *srv, int srv_sz, void *shm_addr, size_t shm_size, + const char *file_path, int shm_flags); + +Utility function that uses `maapi_get_schema_stats()`, +`confd_mmap_schemas_setup()` and `maapi_load_schemas()` (see +[confd_lib_maapi(3)](confd_lib_maapi.3.md)) to load schema information +from NSO into a file that can later be used for memory mapping the +schema. + +Use of this utility function is typically not needed as enabling +/ncs-config/enable-shared-memory-schema in +[ncs.conf(5)](ncs.conf.5.md) will maintain a file to be used for +memory mapping of the schema data. 
+ int confd_mmap_schemas_setup( - void *addr, size_t size, const char *filename, int flags); + void *addr, size_t size, const char *filename, int flags, const struct confd_schema_stats *stats); This function sets up for a subsequent call of one of the schema-loading functions (`confd_load_schemas()` etc) to load the schema information @@ -453,6 +504,8 @@ struct confd_nsinfo { uint32_t hash; const char *revision; const char *module; + uint32_t *nsdeps; + int num_nsdeps; }; ``` @@ -637,13 +690,14 @@ this case `maapi_xpath2kpath_th()` must be used to translate the string into a `confd_hkeypath_t`, which can then be used with `CONFD_SET_OBJECTREF()` to create the `confd_value_t` value. -> **Note** -> -> When the resulting value is of one of the C_BUF, C_BINARY, C_LIST, -> C_OBJECTREF, C_OID, C_QNAME, C_HEXSTR, or C_BITBIG `confd_value_t` -> types, the library has allocated memory to hold the value. It is up to -> the user of this function to free the memory using -> `confd_free_value()`. +
+ +When the resulting value is of one of the C_BUF, C_BINARY, C_LIST, +C_OBJECTREF, C_OID, C_QNAME, C_HEXSTR, or C_BITBIG `confd_value_t` +types, the library has allocated memory to hold the value. It is up to +the user of this function to free the memory using `confd_free_value()`. + +
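A small sketch of the allocate-and-free contract for `confd_str2val()`, assuming the schema has already been loaded (e.g. with `confd_load_schemas()`) and using a hypothetical `/servers/server/hostname` string leaf.

``` c
#include <stddef.h>
#include <confd_lib.h>

void str2val_example(void)
{
    struct confd_cs_node *node =
        confd_cs_node_cd(NULL, "/servers/server/hostname");
    confd_value_t v;

    if (node != NULL &&
        confd_str2val(node->info.type, "www.example.com", &v) == CONFD_OK) {
        /* ... use v ... */
        confd_free_value(&v);   /* C_BUF: the library allocated the data */
    }
}
```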
char *confd_val2str_ptr( struct confd_type *type, const confd_value_t *val); @@ -741,13 +795,15 @@ possible internal pointers inside the struct. Typically we use If the held value is of fixed size, e.g. integers, xmltags etc, the `confd_free_value()` function does nothing. -> **Note** -> -> Memory for values received as parameters to callback functions is -> always managed by the library - the application must *not* call -> `confd_free_value()` for those (on the other hand values of the types -> listed above that are received as parameters to a callback function -> must be copied if they are to persist beyond the callback invocation). +
+ +Memory for values received as parameters to callback functions is always +managed by the library - the application must *not* call +`confd_free_value()` for those (on the other hand values of the types +listed above that are received as parameters to a callback function must +be copied if they are to persist beyond the callback invocation). + +
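If a value received in a callback has to outlive the callback invocation, it can be deep-copied with `confd_value_dup_to()` (described next); a minimal sketch, where the static variable is just for illustration.

``` c
#include <confd_lib.h>

static confd_value_t saved;
static int have_saved = 0;

static void remember(const confd_value_t *param)
{
    if (have_saved)
        confd_free_value(&saved);        /* drop our previous copy */
    confd_value_dup_to(param, &saved);   /* deep copy that we now own */
    have_saved = 1;
}
```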
confd_value_t *confd_value_dup_to( const confd_value_t *v, confd_value_t *newv); @@ -817,11 +873,13 @@ This function decrypts `len` bytes of data from `ciphertext` and writes the clear text to the `output` pointer. The `output` pointer must point to an area that is at least `len` bytes long. -> **Note** -> -> One of the functions `confd_install_crypto_keys()` and -> `maapi_install_crypto_keys()` must have been called before -> `confd_decrypt()` can be used. +
+ +One of the functions `confd_install_crypto_keys()` and +`maapi_install_crypto_keys()` must have been called before +`confd_decrypt()` can be used. + +
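A hedged sketch of that ordering, using the MAAPI variant on an already connected MAAPI socket; the buffer handling is illustrative, and `maapi_install_crypto_keys()` would normally be called once at startup rather than per decryption.

``` c
#include <string.h>
#include <confd_lib.h>
#include <confd_maapi.h>

int decrypt_example(int msock, const char *ciphertext,
                    char *cleartext /* at least as long as the ciphertext */)
{
    if (maapi_install_crypto_keys(msock) != CONFD_OK)
        return CONFD_ERR;
    return confd_decrypt(ciphertext, (int)strlen(ciphertext), cleartext);
}
```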
## User-Defined Types @@ -867,11 +925,13 @@ Connects a stream socket to NSO. The `id` and the `flags` take different values depending on the usage scenario. This is indicated for each individual function that makes use of a stream socket. -> **Note** -> -> If this call fails (i.e. does not return CONFD_OK), the socket -> descriptor must be closed and a new socket created before the call is -> re-attempted. +
+ +If this call fails (i.e. does not return CONFD_OK), the socket +descriptor must be closed and a new socket created before the call is +re-attempted. + +
## Marshalling @@ -1168,13 +1228,14 @@ A call of `confd_trans_seterr_extended_info()` to populate the -> **Note** -> -> The toplevel elements in the `confd_tag_value_t` array *must* have the -> `ns` element of the `struct xml_tag` set. The -> `CONFD_SET_TAG_XMLBEGIN()` macro will set this element, but for -> toplevel leaf elements the `CONFD_SET_TAG_NS()` macro needs to be -> used, as shown above. +
+ +The toplevel elements in the `confd_tag_value_t` array *must* have the +`ns` element of the `struct xml_tag` set. The `CONFD_SET_TAG_XMLBEGIN()` +macro will set this element, but for toplevel leaf elements the +`CONFD_SET_TAG_NS()` macro needs to be used, as shown above. + +
The \ section resulting from the above would look like this: diff --git a/resources/man/confd_lib_maapi.3.md b/resources/man/confd_lib_maapi.3.md index cd9b84fc..754426d1 100644 --- a/resources/man/confd_lib_maapi.3.md +++ b/resources/man/confd_lib_maapi.3.md @@ -38,6 +38,12 @@ connecting to NCS int maapi_get_schema_file_path( int sock, char **buf); + int maapi_get_schema_file_path2( + int sock, char **buf); + + int maapi_get_schema_stats( + int sock, const uint32_t *nshash, const int *nsflags, int num_ns, struct confd_schema_stats *stats); + int maapi_close( int sock); @@ -468,6 +474,9 @@ connecting to NCS int sock, const char *msg, confd_value_t *type, confd_value_t *level, const char *fmt, ...); + int maapi_ncs_get_template_variables( + int sock, const char *template_name, int type, int *num_variables, char ***variables); + int maapi_report_progress( int sock, int thandle, enum confd_progress_verbosity verbosity, const char *msg); @@ -881,11 +890,13 @@ closed. The application has to connect to NCS before it can interact with NCS. -> **Note** -> -> If this call fails (i.e. does not return CONFD_OK), the socket -> descriptor must be closed and a new socket created before the call is -> re-attempted. +
+ +If this call fails (i.e. does not return CONFD_OK), the socket +descriptor must be closed and a new socket created before the call is +re-attempted. + +
*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS @@ -979,6 +990,16 @@ memory schema support has not been enabled, or if the creation of the schema file failed, the function returns CONFD_ERR with `confd_errno` set to CONFD_ERR_NOEXISTS. +*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_NOEXISTS + + int maapi_get_schema_stats( + int sock, const uint32_t *nshash, const int *nsflags, int num_ns, struct confd_schema_stats *stats); + +This function will get schema information for the shared memory mapping +file, which can then be passed to `confd_mmap_schemas_setup()` (see +[confd_lib_lib(3)](confd_lib_lib.3.md)). If the call is successful, +`stats` is filled with schema information else its state is undefined. + *Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_NOEXISTS int maapi_close( @@ -1371,12 +1392,14 @@ the candidate is committed to running. To set only the "Label", give If both `label` and `comment` are NULL, the function does exactly the same as `maapi_candidate_confirmed_commit_persistent()`. -> **Note** -> -> To ensure that the "Label" and/or "Comment" are stored in the rollback -> file in all cases when doing a confirmed commit, they must be given -> both with the confirmed commit (using this function) and with the -> confirming commit (using `maapi_candidate_commit_info()`). +
+ +To ensure that the "Label" and/or "Comment" are stored in the rollback +file in all cases when doing a confirmed commit, they must be given both +with the confirmed commit (using this function) and with the confirming +commit (using `maapi_candidate_commit_info()`). + +
If `confd_errno` is CONFD_ERR_NOEXISTS it means that there is an ongoing persistent confirmed commit, but `persist_id` didn't give the right @@ -1409,13 +1432,15 @@ only the "Label", give `comment` as NULL, and to set only the "Comment", give `label` as NULL. If both `label` and `comment` are NULL, the function does exactly the same as `maapi_candidate_commit_persistent()`. -> **Note** -> -> To ensure that the "Label" and/or "Comment" are stored in the rollback -> file in all cases when doing a confirmed commit, they must be given -> both with the confirmed commit (using -> `maapi_candidate_confirmed_commit_info()`) and with the confirming -> commit (using this function). +
+ +To ensure that the "Label" and/or "Comment" are stored in the rollback +file in all cases when doing a confirmed commit, they must be given both +with the confirmed commit (using +`maapi_candidate_confirmed_commit_info()`) and with the confirming +commit (using this function). + +
If `confd_errno` is CONFD_ERR_NOEXISTS it means that there is an ongoing persistent confirmed commit, but `persist_id` didn't give the right @@ -1712,14 +1737,16 @@ eventually commits or aborts. A call to `maapi_apply_trans()` must also eventually be followed by a call to `maapi_finish_trans()` which will terminate the transaction. -> **Note** -> -> For a readonly transaction, i.e. one started with `readwrite` == -> `CONFD_READ`, or for a read-write transaction where we haven't -> actually done any writes, we do not need to call any of the -> validate/prepare/commit/abort or apply functions, since there is -> nothing for them to do. Calling `maapi_finish_trans()` to terminate -> the transaction is sufficient. +
+ +For a readonly transaction, i.e. one started with `readwrite` == +`CONFD_READ`, or for a read-write transaction where we haven't actually +done any writes, we do not need to call any of the +validate/prepare/commit/abort or apply functions, since there is nothing +for them to do. Calling `maapi_finish_trans()` to terminate the +transaction is sufficient. + +
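For reference, a complete write-path sketch (user session, transaction, write, apply, finish) on an already connected MAAPI socket; the user name, context, and path are placeholders.

``` c
#include <string.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <confd_lib.h>
#include <confd_maapi.h>

int set_one_leaf(int msock)
{
    struct confd_ip src;
    const char *groups[] = { "admin" };
    int th;

    memset(&src, 0, sizeof(src));
    src.af = AF_INET;
    inet_pton(AF_INET, "127.0.0.1", &src.ip.v4);

    if (maapi_start_user_session(msock, "admin", "system", groups, 1,
                                 &src, CONFD_PROTO_TCP) != CONFD_OK)
        return CONFD_ERR;
    if ((th = maapi_start_trans(msock, CONFD_RUNNING,
                                CONFD_READ_WRITE)) < 0)
        return CONFD_ERR;
    if (maapi_set_elem2(msock, th, "42", "/some/config/leaf") != CONFD_OK ||
        maapi_apply_trans(msock, th, 0) != CONFD_OK) {   /* keepopen == 0 */
        maapi_finish_trans(msock, th);
        return CONFD_ERR;
    }
    return maapi_finish_trans(msock, th);
}
```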
The parameter `keepopen` can optionally be set to `1`, then the changes to the transaction are not discarded if validation fails. This feature @@ -2132,10 +2159,12 @@ before any call to `maapi_get_next()`, `maapi_get_objects()` or `maapi_find_next()`. In this case, `secondary_index` must point to a NUL-terminated string that is valid throughout the iteration. -> **Note** -> -> ConfD will not sort the uncommitted rows. In this particular case, -> setting the `secondary_index` element will not work. +
+ +ConfD will not sort the uncommitted rows. In this particular case, +setting the `secondary_index` element will not work. + +
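Basic iteration with a cursor, before any filtering, can be sketched as below; the list path is hypothetical, `th` is an already started transaction, and the loop relies only on `mc.n` holding the number of keys of the entry just read (it is no longer positive once the list is exhausted).

``` c
#include <confd_lib.h>
#include <confd_maapi.h>

int count_servers(int msock, int th, int *count)
{
    struct maapi_cursor mc;

    *count = 0;
    if (maapi_init_cursor(msock, th, &mc, "/servers/server") != CONFD_OK)
        return CONFD_ERR;
    if (maapi_get_next(&mc) != CONFD_OK) {
        maapi_destroy_cursor(&mc);
        return CONFD_ERR;
    }
    while (mc.n > 0) {             /* mc.keys[0..mc.n-1] hold the keys */
        (*count)++;
        if (maapi_get_next(&mc) != CONFD_OK)
            break;
    }
    maapi_destroy_cursor(&mc);
    return CONFD_OK;
}
```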
The list can be filtered by setting the `xpath_expr` field of the `struct maapi_cursor` to an XPath expression - this must be done after @@ -2426,19 +2455,21 @@ the `confd_value_t` value element is given as follows: - Keys to select list entries can be given with their values. -> **Note** -> -> When we use C_PTR, we need to take special care to free any allocated -> memory. When we use C_NOEXISTS and the value is stored in the array, -> we can just use `confd_free_value()` regardless of the type, since the -> `confd_value_t` has the type information. But with C_PTR, only the -> actual value is stored in the pointed-to variable, just as for -> `maapi_get_buf_elem()`, `maapi_get_binary_elem()`, etc, and we need to -> free the memory specifically allocated for the types listed in the -> description of `maapi_get_elem()` above. The details of how to do this -> are not given for the `maapi_get_xxx_elem()` functions here, but it is -> the same as for the corresponding `cdb_get_xxx()` functions, see -> [confd_lib_cdb(3)](confd_lib_cdb.3.md). +
+ +When we use C_PTR, we need to take special care to free any allocated +memory. When we use C_NOEXISTS and the value is stored in the array, we +can just use `confd_free_value()` regardless of the type, since the +`confd_value_t` has the type information. But with C_PTR, only the +actual value is stored in the pointed-to variable, just as for +`maapi_get_buf_elem()`, `maapi_get_binary_elem()`, etc, and we need to +free the memory specifically allocated for the types listed in the +description of `maapi_get_elem()` above. The details of how to do this +are not given for the `maapi_get_xxx_elem()` functions here, but it is +the same as for the corresponding `cdb_get_xxx()` functions, see +[confd_lib_cdb(3)](confd_lib_cdb.3.md). + +
All elements have the same position in the array after the call, in order to simplify extraction of the values - this means that optional @@ -2449,11 +2480,13 @@ only indication of a non-existing value is that the destination variable has not been modified - it's up to the application to set it to some "impossible" value before the call when optional leafs are read. -> **Note** -> -> Selection of a list entry by its "instance integer", which can be done -> with `cdb_get_values()` by using C_CDBBEGIN, can *not* be done with -> `maapi_get_values()` +
+ +Selection of a list entry by its "instance integer", which can be done +with `cdb_get_values()` by using C_CDBBEGIN, can *not* be done with +`maapi_get_values()` + +
*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_NOSESSION, CONFD_ERR_BADPATH, CONFD_ERR_BADTYPE, CONFD_ERR_NOEXISTS, @@ -2841,11 +2874,13 @@ struct ncs_name_value { The `flags` parameter is currently unused and should be given as 0. -> **Note** -> -> If this function is called under FASTMAP it will have the same -> behavior as the corresponding FASTMAP function -> `maapi_shared_ncs_apply_template()`. +
+ +If this function is called under FASTMAP it will have the same behavior +as the corresponding FASTMAP function +`maapi_shared_ncs_apply_template()`. + +
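An illustrative call on an already started read-write transaction, assuming a template named `acme-iface-template` with a single variable `IF_NAME` and a root path under a device named `ce0` (all made up).

``` c
#include <confd_lib.h>
#include <confd_maapi.h>

int apply_iface_template(int msock, int th)
{
    struct ncs_name_value vars[] = {
        { "IF_NAME", "GigabitEthernet0/1" }
    };

    return maapi_ncs_apply_template(msock, th, "acme-iface-template",
                                    vars, 1, 0 /* flags, unused */,
                                    "/devices/device{ce0}/config");
}
```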
*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_NOSESSION, CONFD_ERR_ACCESS_DENIED, CONFD_ERR_NOT_WRITABLE, CONFD_ERR_BADPATH, @@ -2966,29 +3001,30 @@ entry. The `from` path must be pre-formatted, e.g. using `confd_format_keypath()`, whereas the destination path is formatted by this function. -> **Note** -> -> The data models for the source and destination trees must match - i.e. -> they must either be identical, or the data model for the source tree -> must be a proper subset of the data model for the destination tree. -> This is always fulfilled when copying from one entry to another in a -> list, or if both source and destination tree have been defined via -> YANG `uses` statements referencing the same `grouping` definition. If -> a data model mismatch is detected, e.g. an existing data node in the -> source tree does not exist in the destination data model, or an -> existing leaf in the source tree has a value that is incompatible with -> the type of the leaf in the destination data model, -> `maapi_copy_tree()` will return CONFD_ERR with `confd_errno` set to -> CONFD_ERR_BADPATH. -> -> To provide further explanation, a tree is a proper subset of another -> tree if it has less information than the other. For example, a tree -> with the leaves a,b,c is a proper subset of a tree with the leaves -> a,b,c,d,e. It is important to note that it is less information and not -> different information. Therefore, a tree with different default values -> than another tree is not a proper subset, or, a tree with an -> non-presence container can not be a proper subset of a tree with a -> presence container. +
+
+The data models for the source and destination trees must match - i.e.
+they must either be identical, or the data model for the source tree
+must be a proper subset of the data model for the destination tree. This
+is always fulfilled when copying from one entry to another in a list, or
+if both source and destination tree have been defined via YANG `uses`
+statements referencing the same `grouping` definition. If a data model
+mismatch is detected, e.g. an existing data node in the source tree does
+not exist in the destination data model, or an existing leaf in the
+source tree has a value that is incompatible with the type of the leaf
+in the destination data model, `maapi_copy_tree()` will return CONFD_ERR
+with `confd_errno` set to CONFD_ERR_BADPATH.
+
+To provide further explanation, a tree is a proper subset of another
+tree if it has less information than the other. For example, a tree with
+the leaves a,b,c is a proper subset of a tree with the leaves a,b,c,d,e.
+It is important to note that it is less information and not different
+information. Therefore, a tree with different default values than
+another tree is not a proper subset, and a tree with a non-presence
+container can not be a proper subset of a tree with a presence
+container.
+
+
*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_NOSESSION, CONFD_ERR_ACCESS_DENIED, CONFD_ERR_NOT_WRITABLE, CONFD_ERR_BADPATH @@ -4128,11 +4164,13 @@ as a string, and the socket is a maapi socket obtained using `maapi_connect()`. On success, the function returns the number of connections that were closed. -> **Note** -> -> ConfD will close all its sockets with remote address `address`, -> *except* HA connections. For HA use `confd_ha_secondary_dead()` or an -> HA state transition. +
+ +ConfD will close all its sockets with remote address `address`, *except* +HA connections. For HA use `confd_ha_secondary_dead()` or an HA state +transition. + +
*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_BADTYPE, CONFD_ERR_UNAVAILABLE @@ -4310,15 +4348,17 @@ the function `maapi_save_config_result()`. The stream socket must be connected within 10 seconds after the id is received. -> **Note** -> -> The `maapi_save_config()` function can not be used with an attached -> transaction in a data callback (see -> [confd_lib_dp(3)](confd_lib_dp.3.md)), since it requires active -> participation by the transaction manager, which is blocked waiting for -> the callback to return. However it is possible to use it with a -> transaction started via `maapi_start_trans_in_trans()` with the -> attached transaction as backend. +
+ +The `maapi_save_config()` function can not be used with an attached +transaction in a data callback (see +[confd_lib_dp(3)](confd_lib_dp.3.md)), since it requires active +participation by the transaction manager, which is blocked waiting for +the callback to return. However it is possible to use it with a +transaction started via `maapi_start_trans_in_trans()` with the attached +transaction as backend. + +
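A sketch of the id-plus-stream-socket sequence described above; the save flags, the subtree (here the whole configuration), and the output descriptor are illustrative, and `srv`/`srv_sz` must point at the same IPC address used for the MAAPI socket.

``` c
#include <sys/socket.h>
#include <unistd.h>
#include <confd_lib.h>
#include <confd_maapi.h>

int dump_config(int msock, int th, const struct sockaddr *srv, int srv_sz,
                int outfd)
{
    char buf[4096];
    ssize_t n;
    int ssock;
    int id = maapi_save_config(msock, th,
                               MAAPI_CONFIG_XML | MAAPI_CONFIG_XML_PRETTY,
                               "/");
    if (id < 0)
        return CONFD_ERR;
    if ((ssock = socket(PF_INET, SOCK_STREAM, 0)) < 0)
        return CONFD_ERR;
    if (confd_stream_connect(ssock, srv, srv_sz, id, 0) != CONFD_OK) {
        close(ssock);
        return CONFD_ERR;
    }
    while ((n = read(ssock, buf, sizeof(buf))) > 0)
        (void)write(outfd, buf, (size_t)n);      /* copy the config data */
    close(ssock);
    return maapi_save_config_result(msock, id);  /* CONFD_OK on success */
}
```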
*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_BAD_TYPE @@ -4349,11 +4389,13 @@ meanings as for `maapi_save_config()`. If the name of the file ends in .gz (or .Z) then the file is assumed to be gzipped, and will be uncompressed as it is loaded. -> **Note** -> -> If you use a relative pathname for `filename`, it is taken as relative -> to the working directory of the ConfD daemon, i.e. the directory where -> the daemon was started. +
+ +If you use a relative pathname for `filename`, it is taken as relative +to the working directory of the ConfD daemon, i.e. the directory where +the daemon was started. + +
By default the complete configuration (as allowed by the user of the current transaction) is deleted before the file is loaded. To merge the @@ -4406,17 +4448,18 @@ The other `flags` parameters are the same as for `maapi_save_config()`, however the flags `MAAPI_CONFIG_WITH_SERVICE_META`, `MAAPI_CONFIG_NO_PARENTS`, and `MAAPI_CONFIG_CDB_ONLY` are ignored. -> **Note** -> -> The `maapi_load_config()` function can not be used with an attached -> transaction in a data callback (see -> [confd_lib_dp(3)](confd_lib_dp.3.md)), since it requires active -> participation by the transaction manager, which is blocked waiting for -> the callback to return. However it is possible to use it with a -> transaction started via `maapi_start_trans_in_trans()` with the -> attached transaction as backend, writing the changes to the attached -> transaction by invoking `maapi_apply_trans()` for the -> "trans-in-trans". +
+ +The `maapi_load_config()` function can not be used with an attached +transaction in a data callback (see +[confd_lib_dp(3)](confd_lib_dp.3.md)), since it requires active +participation by the transaction manager, which is blocked waiting for +the callback to return. However it is possible to use it with a +transaction started via `maapi_start_trans_in_trans()` with the attached +transaction as backend, writing the changes to the attached transaction +by invoking `maapi_apply_trans()` for the "trans-in-trans". + +
*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_BADTYPE, CONFD_ERR_BADPATH, CONFD_ERR_BAD_CONFIG, CONFD_ERR_ACCESS_DENIED, @@ -4432,11 +4475,13 @@ The `th` and `flags` parameters are the same as for An optional `chroot` path can be given. -> **Note** -> -> The same restriction as for `maapi_load_config()` regarding an -> attached transaction in a data callback applies also to -> `maapi_load_config_cmds()` +
+ +The same restriction as for `maapi_load_config()` regarding an attached +transaction in a data callback applies also to +`maapi_load_config_cmds()` + +
*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_BADTYPE, CONFD_ERR_BADPATH, CONFD_ERR_BAD_CONFIG, CONFD_ERR_ACCESS_DENIED, @@ -4485,11 +4530,13 @@ configuration load was successful we use the function The stream socket must be connected within 10 seconds after the id is received. -> **Note** -> -> The same restriction as for `maapi_load_config()` regarding an -> attached transaction in a data callback applies also to -> `maapi_load_config_stream()` +
+ +The same restriction as for `maapi_load_config()` regarding an attached +transaction in a data callback applies also to +`maapi_load_config_stream()` + +
*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_BADTYPE, CONFD_ERR_PROTOUSAGE, CONFD_ERR_EXTERNAL @@ -4618,14 +4665,15 @@ function. -> **Note** -> -> A call to `maapi_get_stream_progress()` does not return until the -> number of bytes read has increased from the previous call (or if there -> is an error). This means that the above code does not imply -> busy-looping, but also that if the code was to call -> `maapi_get_stream_progress()` when `n_read` == `n_written`, the result -> would be a deadlock. +
+ +A call to `maapi_get_stream_progress()` does not return until the number +of bytes read has increased from the previous call (or if there is an +error). This means that the above code does not imply busy-looping, but +also that if the code was to call `maapi_get_stream_progress()` when +`n_read` == `n_written`, the result would be a deadlock. + +
*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_NOEXISTS @@ -5012,11 +5060,13 @@ CONFD_ERR_STALE_INSTANCE, CONFD_ERR_BADTYPE, CONFD_ERR_EXTERNAL Calling this function at any point before the call of `maapi_commit_upgrade()` will abort the upgrade. -> **Note** -> -> `maapi_abort_upgrade()` should *not* be called if any of the three -> previous functions fail - in that case, ConfD will do an internal -> abort of the upgrade. +
+ +`maapi_abort_upgrade()` should *not* be called if any of the three +previous functions fail - in that case, ConfD will do an internal abort +of the upgrade. + +
## Confd Daemon Control @@ -5146,12 +5196,14 @@ together if more than one: -> **Note** -> -> It is not possible to rebind sockets for northbound listeners during -> the transition from start phase 1 to start phase 2. If this is -> attempted, the call will fail (and do nothing) with `confd_errno` set -> to CONFD_ERR_BADSTATE. +
+ +It is not possible to rebind sockets for northbound listeners during the +transition from start phase 1 to start phase 2. If this is attempted, +the call will fail (and do nothing) with `confd_errno` set to +CONFD_ERR_BADSTATE. + +
*Errors*: CONFD_ERR_MALLOC, CONFD_ERR_OS, CONFD_ERR_BADSTATE diff --git a/resources/man/confd_types.3.md b/resources/man/confd_types.3.md index 9e760381..56be87bc 100644 --- a/resources/man/confd_types.3.md +++ b/resources/man/confd_types.3.md @@ -1260,17 +1260,18 @@ description below. The choice of `confd_vtype` to use for the value representation can be whatever suits the actual data values best, with one exception: -> **Note** -> -> The C_LIST `confd_vtype` value can *not* be used for a leaf that is a -> key in a YANG list. The "normal" C_LIST usage is only for -> representation of leaf-lists, and a leaf-list can of course not be a -> key. Thus the ConfD code is not prepared to handle this kind of -> "value" for a key. It is a strong recommendation to *never* use C_LIST -> for a user-defined type, since even if the type is not initially used -> for key leafs, subsequent development may see a need for this, at -> which point it may be cumbersome to change to a different -> representation. +
+ +The C_LIST `confd_vtype` value can *not* be used for a leaf that is a +key in a YANG list. The "normal" C_LIST usage is only for representation +of leaf-lists, and a leaf-list can of course not be a key. Thus the +ConfD code is not prepared to handle this kind of "value" for a key. It +is a strong recommendation to *never* use C_LIST for a user-defined +type, since even if the type is not initially used for key leafs, +subsequent development may see a need for this, at which point it may be +cumbersome to change to a different representation. + +
The example uses C_INT32, C_IPV4PREFIX, and C_IPV6PREFIX for the value representation of the respective types, but in many cases the opaque @@ -1313,6 +1314,14 @@ callback functions that are defined in the `struct confd_type`: ``` c struct confd_type { + /* primitive type */ + enum confd_type_id id; + + /* namespace of the type*/ + uint32_t ns; + /* name of the type */ + char *name; + /* If a derived type point at the parent */ struct confd_type *parent; @@ -1387,12 +1396,14 @@ auxiliary (static) data needed by the functions (on invocation they can reference it as self-\>opaque). The `parent` and `defval` elements are not used in this context, and should be NULL. -> **Note** -> -> The `str_to_val()` function *must* allocate space (using e.g. -> malloc(3)) for the actual data value for those confd_value_t types -> that are listed as having allocated data above, i.e. C_BUF, C_QNAME, -> C_LIST, C_OBJECTREF, C_OID, C_BINARY, and C_HEXSTR. +
+ +The `str_to_val()` function *must* allocate space (using e.g. malloc(3)) +for the actual data value for those confd_value_t types that are listed +as having allocated data above, i.e. C_BUF, C_QNAME, C_LIST, +C_OBJECTREF, C_OID, C_BINARY, and C_HEXSTR. + +
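As an illustration of that requirement, a `str_to_val()` callback for a type represented as C_BUF might look like the sketch below (syntax checking omitted); the parameter list is assumed to follow the `str_to_val` member of `struct confd_type`.

``` c
#include <stdlib.h>
#include <string.h>
#include <confd_lib.h>

static int my_str_to_val(struct confd_type *self, struct confd_type_ctx *ctx,
                         const char *str, unsigned int len, confd_value_t *v)
{
    unsigned char *copy = malloc(len + 1);   /* value must own its own data */

    (void)self; (void)ctx;
    if (copy == NULL)
        return CONFD_ERR;
    memcpy(copy, str, len);
    copy[len] = '\0';
    /* ... syntax checks on the input would go here ... */
    CONFD_SET_BUF(v, copy, len);             /* ownership passes to the caller */
    return CONFD_OK;
}
```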
We make the implementation available to ConfD by creating one or more shared objects (.so files) containing the above callback functions. Each @@ -1426,26 +1437,29 @@ These structures are then used by ConfD to locate the implementation of a given type, by searching for a `typepoint` string that matches the `tailf:typepoint` argument in the YANG data model. -> **Note** -> -> Since our callbacks are executed directly by the ConfD daemon, it is -> critically important that they do not have a negative impact on the -> daemon. No other processing can be done by ConfD while the callbacks -> are executed, and e.g. a NULL pointer dereference in one of the -> callbacks will cause ConfD to crash. Thus they should be simple, -> purely algorithmic functions, never referencing any external -> resources. - -> **Note** -> -> When user-defined types are present, the ConfD daemon also needs to -> load the libconfd.so shared library, otherwise used only by -> applications. This means that either this library must be in one of -> the system directories that are searched by the OS runtime loader -> (typically /lib and /usr/lib), or its location must be given by -> setting the LD_LIBRARY_PATH environment variable before starting -> ConfD, or the default location \$CONFD_DIR/lib is used, where -> \$CONFD_DIR is the installation directory of ConfD. +
+ +Since our callbacks are executed directly by the ConfD daemon, it is +critically important that they do not have a negative impact on the +daemon. No other processing can be done by ConfD while the callbacks are +executed, and e.g. a NULL pointer dereference in one of the callbacks +will cause ConfD to crash. Thus they should be simple, purely +algorithmic functions, never referencing any external resources. + +
+ +
+ +When user-defined types are present, the ConfD daemon also needs to load +the libconfd.so shared library, otherwise used only by applications. +This means that either this library must be in one of the system +directories that are searched by the OS runtime loader (typically /lib +and /usr/lib), or its location must be given by setting the +LD_LIBRARY_PATH environment variable before starting ConfD, or the +default location \$CONFD_DIR/lib is used, where \$CONFD_DIR is the +installation directory of ConfD. + +
The above is enough for ConfD to use the types that we have defined, but the libconfd library can also do local string\<-\>value translation if @@ -1567,6 +1581,8 @@ There is one tree for each namespace that has toplevel elements. #define CS_NODE_CMP_USER 3 #define CS_NODE_CMP_UNSORTED 4 + typedef struct xml_tag mount_id_t; + struct confd_cs_node_info { uint32_t *keys; int minOccurs; @@ -1578,6 +1594,10 @@ There is one tree for each namespace that has toplevel elements. int flags; uint8_t cmp; struct confd_cs_meta_data *meta_data; + /* not hiding under CONFD_C_PRODUCT_CONFD/CONFD_C_PRODUCT_NSO to avoid + issues in mixed compilation enviroments where libconfd.a is used for + both ConfD and NSO */ + mount_id_t mount_id; }; struct confd_cs_meta_data { @@ -2308,9 +2328,12 @@ types. They are defined in the > An SNMP OBJECT IDENTIFIER (OID). This is a sequence of integers which > identifies an object instance for example "1.3.6.1.4.1.24961.1". > -> > [!NOTE] -> > The `tailf:value-length` restriction is measured in integer elements -> > for `object-identifier` and `object-identifier-128`. +>
+> +> The `tailf:value-length` restriction is measured in integer elements +> for `object-identifier` and `object-identifier-128`. +> +>
> > - `value.type` = C_OID > @@ -2362,9 +2385,12 @@ types. They are defined in the > sequence octets, each octet represented by two hexadecimal digits. > Octets are separated by colons. > -> > [!NOTE] -> > The `tailf:value-length` restriction is measured in number of octets -> > for `phys-address`. +>
+> +> The `tailf:value-length` restriction is measured in number of octets +> for `phys-address`. +> +>
> > - `value.type` = C_BINARY > @@ -2402,9 +2428,12 @@ types. They are defined in the > A hexadecimal string with octets represented as hex digits separated > by colons. > -> > [!NOTE] -> > The `tailf:value-length` restriction is measured in number of octets -> > for `hex-string`. +>
+> +> The `tailf:value-length` restriction is measured in number of octets +> for `hex-string`. +> +>
> > - `value.type` = C_HEXSTR > @@ -2692,9 +2721,12 @@ ConfD. `tailf:octet-list` > A list of dot-separated octets for example "192.168.255.1.0". > -> > [!NOTE] -> > The `tailf:value-length` restriction is measured in number of octets -> > for `octet-list`. +>
+> +> The `tailf:value-length` restriction is measured in number of octets +> for `octet-list`. +> +>
> > - `value.type` = C_BINARY > @@ -2708,9 +2740,12 @@ ConfD. > A list of colon-separated hexa-decimal octets for example > "4F:4C:41:71". > -> > [!NOTE] -> > The `tailf:value-length` restriction is measured in octets of binary -> > data for `hex-list`. +>
+> +> The `tailf:value-length` restriction is measured in octets of binary +> data for `hex-list`. +> +>
> > - `value.type` = C_BINARY > @@ -2762,8 +2797,11 @@ ConfD. > encrypting passwords for various UNIX systems, e.g. > > -> > [!NOTE] -> > The `pattern` restriction can not be used with this type. +>
+> +> The `pattern` restriction can not be used with this type. +> +>
> > - `value.type` = C_BUF > @@ -2899,8 +2937,11 @@ ConfD. > string. For details, see the description of the encryptedStrings > configurable in the [confd.conf(5)](ncs.conf.5.md) manual page. > -> > [!NOTE] -> > The `pattern` restriction can not be used with this type. +>
+> +> The `pattern` restriction can not be used with this type. +> +>
> > - `value.type` = C_BUF > diff --git a/resources/man/mib_annotations.5.md b/resources/man/mib_annotations.5.md index bbf561bd..db97a0b7 100644 --- a/resources/man/mib_annotations.5.md +++ b/resources/man/mib_annotations.5.md @@ -99,4 +99,3 @@ An example of a MIB annotation file. ## See Also The NSO User Guide -> diff --git a/resources/man/ncs-installer.1.md b/resources/man/ncs-installer.1.md index 965fcc51..d87eea51 100644 --- a/resources/man/ncs-installer.1.md +++ b/resources/man/ncs-installer.1.md @@ -108,11 +108,13 @@ a "system installation", suitable for deployment. > > (such as the `tailf-hcc` package). If no such packages are used, the > > file can be removed. > -> > [!NOTE] -> > When the `--run-as-user` option is used, all OS commands executed by -> > NCS will also run as the given user, rather than as the user -> > specified for custom CLI commands (e.g. through clispec -> > definitions). +>
+> +> When the `--run-as-user` option is used, all OS commands executed by +> NCS will also run as the given user, rather than as the user specified +> for custom CLI commands (e.g. through clispec definitions). +> +>
`[ --keep-ncs-setup ]` > The `ncs-setup` command is not usable in a "system installation", and diff --git a/resources/man/ncs-netsim.1.md b/resources/man/ncs-netsim.1.md index b4125726..1ddf3787 100644 --- a/resources/man/ncs-netsim.1.md +++ b/resources/man/ncs-netsim.1.md @@ -70,14 +70,17 @@ that acts as a NETCONF server, a Cisco CLI engine, or an SNMP agent. > network. This command can be given multiple times. The mandatory > parameters are the same as for `create-network`. > -> > [!NOTE] -> > If we have already started NCS with an XML initialization file for -> > the existing network, an updated initialization file will not take -> > effect unless we remove the CDB database files, loosing all NCS -> > configuration. But we can replace the original initialization data -> > with data for the complete new network when we have run -> > `add-to-network`, by using `ncs_load` while NCS is running, e.g. -> > like this: +>
+> +> If we have already started NCS with an XML initialization file for the +> existing network, an updated initialization file will not take effect +> unless we remove the CDB database files, loosing all NCS +> configuration. But we can replace the original initialization data +> with data for the complete new network when we have run +> `add-to-network`, by using `ncs_load` while NCS is running, e.g. like +> this: +> +>
> >
> diff --git a/resources/man/ncs-project-update.1.md b/resources/man/ncs-project-update.1.md index 72d04278..1e783b3b 100644 --- a/resources/man/ncs-project-update.1.md +++ b/resources/man/ncs-project-update.1.md @@ -48,10 +48,8 @@ compiling the packages and to setup any netsim devices. > *setup.mk* files. `--ncs-min-version` -> `--ncs-min-version-non-strict` -> `--use-bundle-packages` > Update using the packages defined in the bundle section. diff --git a/resources/man/ncs-setup.1.md b/resources/man/ncs-setup.1.md index f9212b73..ce9f3d65 100644 --- a/resources/man/ncs-setup.1.md +++ b/resources/man/ncs-setup.1.md @@ -25,13 +25,15 @@ created. Using the `--netsim-dir` and `--package` options, initial environments for using NCS towards simulated devices, real devices, or a combination thereof can be created. -> **Note** -> -> This command is not included by default in a "system install" of NCS -> (see [ncs-installer(1)](ncs-installer.1.md)), since it is not usable -> in such an installation. The (single) execution environment is created -> by the NCS installer when it is invoked with the `--system-install` -> option. +
+ +This command is not included by default in a "system install" of NCS +(see [ncs-installer(1)](ncs-installer.1.md)), since it is not usable +in such an installation. The (single) execution environment is created +by the NCS installer when it is invoked with the `--system-install` +option. + +
## Options @@ -71,10 +73,13 @@ combination thereof can be created. > are found under \$NCS_DIR/packages/neds we can just provide the name > of the NED. We can also give the path to a NED package. > -> > [!NOTE] -> > The script also accepts the alias `--ned-package` (to be backwards -> > compatible). Both options do the same thing, create links to your -> > package regardless of what kind of package it is. +>
+> +> The script also accepts the alias `--ned-package` (to be backwards +> compatible). Both options do the same thing, create links to your +> package regardless of what kind of package it is. +> +>
> > To setup NCS to manage Juniper and Cisco routers we execute: > diff --git a/resources/man/ncs.conf.5.md b/resources/man/ncs.conf.5.md index 1142259a..d731c9e7 100644 --- a/resources/man/ncs.conf.5.md +++ b/resources/man/ncs.conf.5.md @@ -202,9 +202,11 @@ how they relate to each other. > client processes that are allowed to connect to the IPC listener > sockets. -/ncs-config/enable-shared-memory-schema (boolean) \[true\] +/ncs-config/enable-shared-memory-schema (boolean \| c \| java \| python) \[true\] +> This parameter may be given multiple times. +> > If set to 'true', then a C program will be started that loads the -> schema into shared memory (which then can be accessed by e.g Python) +> schema into a memory mappable file. /ncs-config/shared-memory-schema-path (string) > Path to the shared memory file holding the schema. If left @@ -393,7 +395,6 @@ how they relate to each other. > searches for initialization files. /ncs-config/cdb/persistence/format (in-memory-v1 \| on-demand-v1) \[in-memory-v1\] -> /ncs-config/cdb/persistence/db-statistics (disabled \| enabled) \[disabled\] > If set to 'enabled', underlying database produces internal statistics @@ -548,7 +549,6 @@ how they relate to each other. > which will be used to encrypt any strings. /ncs-config/encrypted-strings/key-rotation/generation (int16) -> /ncs-config/encrypted-strings/key-rotation/AESCFB128 > In the AESCFB128 case one 128 bits (16 bytes) key and a random initial @@ -585,7 +585,7 @@ how they relate to each other. > of the types ianach:crypt-hash, tailf:sha-256-digest-string, and > tailf:sha-512-digest-string. -/ncs-config/crypt-hash/algorithm (md5 \| sha-256 \| sha-512) \[md5\] +/ncs-config/crypt-hash/algorithm (md5 \| sha-256 \| sha-512) \[sha-512\] > algorithm can be set to one of the values 'md5', 'sha-256', or > 'sha-512', to choose the corresponding hash algorithm for hashing of > cleartext input for the ianach:crypt-hash type. @@ -1033,10 +1033,8 @@ how they relate to each other. > rotated. Log filenames are reused when five logs have been exhausted. /ncs-config/logs/error-log/debug/enabled (boolean) \[false\] -> /ncs-config/logs/error-log/debug/level (uint16) \[2\] -> /ncs-config/logs/error-log/debug/tag (string) > This parameter may be given multiple times. @@ -1058,7 +1056,6 @@ how they relate to each other. > The directory path to the location of the progress trace files. /ncs-config/logs/external/enabled (boolean) \[false\] -> /ncs-config/logs/external/command (string) > This parameter is mandatory. @@ -1160,7 +1157,6 @@ how they relate to each other. > setting can be smaller than the number of logical processors. /ncs-config/transaction-limits/scheduling-mode (relaxed \| strict) \[relaxed\] -> /ncs-config/parser-limits > Parameters for limiting parsing of XML data. @@ -1752,10 +1748,8 @@ how they relate to each other. > machine. /ncs-config/cli/ssh/extra-listen/ip (ipv4-address \| ipv6-address) -> /ncs-config/cli/ssh/extra-listen/port (port-number) -> /ncs-config/cli/ssh/ha-primary-listen > When /ncs-config/ha/enable or /ncs-config/ha-raft/enable is set to @@ -1766,10 +1760,8 @@ how they relate to each other. > terminate any ongoing traffic. /ncs-config/cli/ssh/ha-primary-listen/ip (ipv4-address \| ipv6-address) -> /ncs-config/cli/ssh/ha-primary-listen/port (port-number) -> /ncs-config/cli/top-level-cmds-in-sub-mode (boolean) \[false\] > topLevelCmdsInSubMode is either 'true' or 'false'. If set to 'true' @@ -2083,7 +2075,6 @@ how they relate to each other. > in the ncs.cli file. 
/ncs-config/cli/space-completion/enabled (boolean)
->

/ncs-config/cli/ignore-leading-whitespace (boolean)
> If 'false' then the CLI will show completion help when the user enters

@@ -2151,6 +2142,11 @@ how they relate to each other.
> 'range' keyword is not allowed in C- and I-style for range
> expressions.

+/ncs-config/cli/use-comma-as-range-key-delim (boolean) \[false\]
+> If 'true' then a comma in range expressions will only be interpreted
+> as a delimiter between keys, as in key1,key2 or key1-2,key4, and not
+> as part of a range with an integer part, as in key1-2,4.
+
/ncs-config/cli/commit-message-format (string) \[ System message at \$(time)... Commit performed by \$(user) via \$(proto) using \$(ctx). \]
> The format of the CLI commit messages

@@ -2321,7 +2317,6 @@ how they relate to each other.
> The headers will be part of all HTTP responses.

/ncs-config/restconf/custom-headers/header/name (string)
->

/ncs-config/restconf/custom-headers/header/value (string)
> This parameter is mandatory.

@@ -2447,10 +2442,8 @@ how they relate to each other.
> listen on the port for all IPv4 or IPv6 addresses on the machine.

/ncs-config/restconf/transport/tcp/extra-listen/ip (ipv4-address \| ipv6-address)
->

/ncs-config/restconf/transport/tcp/extra-listen/port (port-number)
->

/ncs-config/restconf/transport/tcp/ha-primary-listen
> When /ncs-config/ha/enable or /ncs-config/ha-raft/enable is set to

@@ -2461,10 +2454,8 @@ how they relate to each other.
> terminate any ongoing traffic.

/ncs-config/restconf/transport/tcp/ha-primary-listen/ip (ipv4-address \| ipv6-address)
->

/ncs-config/restconf/transport/tcp/ha-primary-listen/port (port-number)
->

/ncs-config/restconf/transport/tcp/dscp (dscp-type)
> Support for setting the Differentiated Services Code Point (6 bits)

@@ -2493,10 +2484,8 @@ how they relate to each other.
> addresses on the machine.

/ncs-config/restconf/transport/ssl/extra-listen/ip (ipv4-address \| ipv6-address)
->

/ncs-config/restconf/transport/ssl/extra-listen/port (port-number)
->

/ncs-config/restconf/transport/ssl/ha-primary-listen
> When /ncs-config/ha/enable or /ncs-config/ha-raft/enable is set to

@@ -2507,10 +2496,8 @@ how they relate to each other.
> terminate any ongoing traffic.

/ncs-config/restconf/transport/ssl/ha-primary-listen/ip (ipv4-address \| ipv6-address)
->

/ncs-config/restconf/transport/ssl/ha-primary-listen/port (port-number)
->

/ncs-config/restconf/transport/ssl/dscp (dscp-type)
> Support for setting the Differentiated Services Code Point (6 bits)

@@ -2667,6 +2654,12 @@ how they relate to each other.
> enabled is either 'true' or 'false'. If 'true', the Web server is
> started.

+/ncs-config/webui/max-connections (uint64) \[1024\]
+> The number of concurrent connections allowed to the web server. Note
+> that due to how the server handles new connections, the number may
+> temporarily be higher than the set number, but the actual connections
+> will never be higher than the set number.
+
/ncs-config/webui/server-name (string) \[localhost\]
> The hostname the Web server serves.

@@ -2729,7 +2722,6 @@ how they relate to each other.
> The headers will be part of all HTTP responses.

/ncs-config/webui/custom-headers/header/name (string)
->

/ncs-config/webui/custom-headers/header/value (string)
> This parameter is mandatory.

@@ -2887,10 +2879,8 @@ how they relate to each other.
> the port for all IPv4 or IPv6 addresses on the machine.

/ncs-config/webui/transport/tcp/extra-listen/ip (ipv4-address \| ipv6-address)
->

/ncs-config/webui/transport/tcp/extra-listen/port (port-number)
->

/ncs-config/webui/transport/tcp/ha-primary-listen
> When /ncs-config/ha/enable or /ncs-config/ha-raft/enable is set to

@@ -2901,10 +2891,8 @@ how they relate to each other.
> terminate any ongoing traffic.

/ncs-config/webui/transport/tcp/ha-primary-listen/ip (ipv4-address \| ipv6-address)
->

/ncs-config/webui/transport/tcp/ha-primary-listen/port (port-number)
->

/ncs-config/webui/transport/ssl
> Settings deciding how the Web server SSL (Secure Sockets Layer)

@@ -2956,10 +2944,8 @@ how they relate to each other.
> on the machine.

/ncs-config/webui/transport/ssl/extra-listen/ip (ipv4-address \| ipv6-address)
->

/ncs-config/webui/transport/ssl/extra-listen/port (port-number)
->

/ncs-config/webui/transport/ssl/ha-primary-listen
> When /ncs-config/ha/enable or /ncs-config/ha-raft/enable is set to

@@ -2970,10 +2956,8 @@ how they relate to each other.
> terminate any ongoing traffic.

/ncs-config/webui/transport/ssl/ha-primary-listen/ip (ipv4-address \| ipv6-address)
->

/ncs-config/webui/transport/ssl/ha-primary-listen/port (port-number)
->

/ncs-config/webui/transport/ssl/read-from-db (boolean) \[false\]
> If enabled, TLS data (certificate, private key, and CA certificates)

@@ -3322,10 +3306,8 @@ how they relate to each other.
> on the port for all IPv4 or IPv6 addresses on the machine.

/ncs-config/netconf-north-bound/transport/ssh/extra-listen/ip (ipv4-address \| ipv6-address)
->

/ncs-config/netconf-north-bound/transport/ssh/extra-listen/port (port-number)
->

/ncs-config/netconf-north-bound/transport/ssh/ha-primary-listen
> When /ncs-config/ha/enable or /ncs-config/ha-raft/enable is set to

@@ -3336,10 +3318,8 @@ how they relate to each other.
> terminate any ongoing traffic.

/ncs-config/netconf-north-bound/transport/ssh/ha-primary-listen/ip (ipv4-address \| ipv6-address)
->

/ncs-config/netconf-north-bound/transport/ssh/ha-primary-listen/port (port-number)
->

/ncs-config/netconf-north-bound/transport/tcp
> NETCONF over TCP is not standardized, but it can be useful during

@@ -3373,10 +3353,8 @@ how they relate to each other.
> on the port for all IPv4 or IPv6 addresses on the machine.

/ncs-config/netconf-north-bound/transport/tcp/extra-listen/ip (ipv4-address \| ipv6-address)
->

/ncs-config/netconf-north-bound/transport/tcp/extra-listen/port (port-number)
->

/ncs-config/netconf-north-bound/transport/tcp/ha-primary-listen
> When /ncs-config/ha/enable or /ncs-config/ha-raft/enable is set to

@@ -3387,10 +3365,8 @@ how they relate to each other.
> terminate any ongoing traffic.

/ncs-config/netconf-north-bound/transport/tcp/ha-primary-listen/ip (ipv4-address \| ipv6-address)
->

/ncs-config/netconf-north-bound/transport/tcp/ha-primary-listen/port (port-number)
->

/ncs-config/netconf-north-bound/extended-sessions (boolean) \[false\]
> If extended-sessions are enabled, all NCS sessions can be terminated

@@ -3552,10 +3528,8 @@ how they relate to each other.
> listen on the port for all IPv4 or IPv6 addresses on the machine.

/ncs-config/netconf-call-home/transport/tcp/extra-listen/ip (ipv4-address \| ipv6-address)
->

/ncs-config/netconf-call-home/transport/tcp/extra-listen/port (port-number)
->

/ncs-config/netconf-call-home/transport/tcp/dscp (dscp-type)
> Support for setting the Differentiated Services Code Point (6 bits)

@@ -3706,10 +3680,8 @@ how they relate to each other.
> to listen on the port for all IPv4 or IPv6 addresses on the machine.

/ncs-config/ha/extra-listen/ip (ipv4-address \| ipv6-address)
->

/ncs-config/ha/extra-listen/port (port-number)
->

/ncs-config/ha/tick-timeout (xs:duration) \[PT20S\]
> Defines the timeout between keepalive ticks sent between HA nodes. The

@@ -3767,7 +3739,6 @@ how they relate to each other.
> Only applicable if auto-start is 'true'.

/ncs-config/java-vm/run-in-terminal/enabled (boolean) \[false\]
->

/ncs-config/java-vm/run-in-terminal/terminal-command (string) \[xterm -title ncs-java-vm -e\]
> The command which NCS will run to start the terminal, or the string

@@ -3792,10 +3763,8 @@ how they relate to each other.
> redeployed.

/ncs-config/java-vm/restart-on-error/count (uint16) \[3\]
->

/ncs-config/java-vm/restart-on-error/duration (xs:duration) \[PT60S\]
->

/ncs-config/python-vm
> Configuration parameters to control how and if NCS shall start (and

@@ -3818,7 +3787,6 @@ how they relate to each other.
> is equivalent to leaving this parameter unset.

/ncs-config/python-vm/run-in-terminal/enabled (boolean) \[false\]
->

/ncs-config/python-vm/run-in-terminal/terminal-command (string) \[xterm -title ncs-python-vm -e\]
> The command which NCS will run to start the terminal, or the string

diff --git a/resources/man/ncs_load.1.md b/resources/man/ncs_load.1.md
index aceae627..be574cd4 100644
--- a/resources/man/ncs_load.1.md
+++ b/resources/man/ncs_load.1.md
@@ -80,11 +80,14 @@ success and non-zero otherwise.
> the `system` context, which implies that AAA rules will *not* be
> applied at all.
>
-> > [!NOTE]
-> > If the environment variables `NCS_MAAPI_USID` and
-> > `NCS_MAAPI_THANDLE` are set (see the ENVIRONMENT section), or if the
-> > `-i` option is used, these options are silently ignored, since
-> > `ncs_load` will attach to an existing transaction.
+>
+>
+> If the environment variables `NCS_MAAPI_USID` and `NCS_MAAPI_THANDLE`
+> are set (see the ENVIRONMENT section), or if the `-i` option is used,
+> these options are silently ignored, since `ncs_load` will attach to an
+> existing transaction.
+>
+>
`-i`
> Instead of starting a new user session and transaction, `ncs_load`

diff --git a/resources/man/ncsc.1.md b/resources/man/ncsc.1.md
index aee52501..1bd9546f 100644
--- a/resources/man/ncsc.1.md
+++ b/resources/man/ncsc.1.md
@@ -813,7 +813,6 @@ exceptions:
## See Also

The NCS User Guide
->

`ncs(1)`
> command to start and control the NCS daemon

diff --git a/resources/man/tailf_yang_cli_extensions.5.md b/resources/man/tailf_yang_cli_extensions.5.md
index c937ef87..c8789568 100644
--- a/resources/man/tailf_yang_cli_extensions.5.md
+++ b/resources/man/tailf_yang_cli_extensions.5.md
@@ -645,8 +645,8 @@ The *cli-delayed-auto-commit* statement can be used in: *container*,

### tailf:cli-delete-container-on-delete

-Specifies that the parent container should be deleted when . this leaf
-is deleted.
+Specifies that the parent container should be deleted when this leaf is
+deleted.

The *cli-delete-container-on-delete* statement can be used in: *leaf*
and *refine*.

@@ -1863,10 +1863,13 @@ The *cli-replace-all* statement can be used in: *leaf-list*,

### tailf:cli-reset-container

-Specifies that all sibling leaves in the container should be reset when
-this element is set.
+Specifies that all sibling leafs in the container should be removed when
+this element is set. If setting multiple leafs in a single command, only
+the remaining sibling leafs are removed.

-When used on a container its content is cleared when set.
+When this extension is used on a container, its child leafs will inherit
+the extension. Additionally, performing set on the container will clear
+all of its contents.

The *cli-reset-container* statement can be used in: *leaf*, *list*,
*container*, and *refine*.

@@ -2827,7 +2830,6 @@ For example:
## See Also

The User Guide
->

`ncsc(1)`
> NCS Yang compiler

diff --git a/resources/man/tailf_yang_extensions.5.md b/resources/man/tailf_yang_extensions.5.md
index 5bddd7f8..dc70c884 100644
--- a/resources/man/tailf_yang_extensions.5.md
+++ b/resources/man/tailf_yang_extensions.5.md
@@ -2338,7 +2338,6 @@ This section describes XPath functions that can be used for example in
> Tail-f YANG CLI extensions

The NSO User Guide
->

`confdc(1)`
> Confdc compiler

diff --git a/whats-new.md b/whats-new.md
index fdf664ed..5afe09eb 100644
--- a/whats-new.md
+++ b/whats-new.md
@@ -40,13 +40,13 @@
- Filtering JSON-RPC show_config method
+Filtering JSON-RPC show_config method

The `show_config` JSON-RPC method now supports filtering and pagination options for improved user experience when retrieving large list instances.

Documentation Updates:

-* Added filtering and pagination parameters to `show_config` documentation in [JSON-RPC API Data](development/advanced-development/web-ui-development/json-rpc-api.md#data).
+* Added filtering and pagination parameters to `show_config` documentation in [JSON-RPC API Data](development/advanced-development/web-ui-development/json-rpc-api.md#data).
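As a rough illustration of the feature described above, the sketch below drives `show_config` through the JSON-RPC interface from Python. It assumes a local NSO instance with the web server on port 8080 and `admin`/`admin` credentials, uses the third-party `requests` library, and deliberately leaves the new filtering and pagination options as commented-out placeholders, since their actual parameter names and semantics are the ones documented in the JSON-RPC API Data section linked above.

```python
import itertools
import requests

JSONRPC_URL = "http://localhost:8080/jsonrpc"  # assumed local NSO web server
session = requests.Session()                   # keeps the JSON-RPC session cookie
ids = itertools.count(1)


def jsonrpc(method, params):
    """Minimal JSON-RPC 2.0 helper for NSO."""
    reply = session.post(JSONRPC_URL, json={
        "jsonrpc": "2.0", "id": next(ids), "method": method, "params": params,
    })
    reply.raise_for_status()
    body = reply.json()
    if "error" in body:
        raise RuntimeError(body["error"])
    return body["result"]


# Authenticate and open a read transaction towards the running datastore.
jsonrpc("login", {"user": "admin", "passwd": "admin"})
th = jsonrpc("new_trans", {"db": "running", "mode": "read"})["th"]

# Fetch configuration for a potentially large list. The commented-out
# parameters are placeholders for the new filtering/pagination options;
# see the JSON-RPC API Data documentation for the real parameter names.
result = jsonrpc("show_config", {
    "th": th,
    "path": "/ncs:devices/device",
    # "offset": 0,    # hypothetical: where in the list to start
    # "limit": 100,   # hypothetical: how many entries to return
})
print(result)
```

Fetching a bounded page at a time is what makes the call practical against very large device or service lists.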
@@ -71,7 +71,7 @@ This NSO version introduces multiple quality of life improvements for service de
* NSO warns if there are unused macros inside XML templates.
- New MAAPI call (`get_template_variables` / `ncsGetTemplateVariables`) enumerates variables in device, service, or compliance template.
-- New MAAPI call (`get_trans_mode` / `getTransactionMode`) returns mode of the transaction, allowing, for example, easier reuse of existing transaction in an action.
+- New MAAPI call (`get_trans_mode` / `getTransactionMode`) returns mode of the transaction, allowing, for example, easier reuse of existing transaction in an action.
- Similar to Python API, Java API action callback now always provides an open transaction. If there is no existing transaction, a new read-only transaction is started automatically.
- Data kickers can now kick for the same transaction where they are defined when configured with a new `kick-on-creation` leaf.

@@ -83,9 +83,9 @@ This NSO version introduces multiple quality of life improvements for service de
The NSO Web Server now has a configurable number of simultaneous connections. Additionally, the number of current connections can be monitored through the metrics framework.

- Documentation Updates:
+Documentation Updates:

-* Documented a new `/ncs-config/webui/max-connections` parameter for the `ncs.conf` file.
+* Documented a new `/ncs-config/webui/max-connections` parameter for the [ncs.conf](resources/man/ncs.conf.5.md) file.

@@ -93,7 +93,7 @@ The NSO Web Server now has a configurable number of simultaneous connections. Ad
Updated Example NEDs

-Network Element Drivers (NEDs) used throughout the [NSO examples](https://github.com/NSO-developer/nso-examples) have been updated to include recent versions of the device models. The new models more closely resemble those in production NEDs, which makes examples more realistic and supports additional real-world scenarios.
+Network Element Drivers (NEDs) used throughout the [NSO examples](https://github.com/NSO-developer/nso-examples/tree/6.6) have been updated to include recent versions of the device models. The new models more closely resemble those in production NEDs, which makes examples more realistic and supports additional real-world scenarios.

Note that these NEDs are still example NEDs and are not designed for production use.
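To illustrate the transaction-reuse point in the MAAPI item above, here is a hedged Python sketch of an action callback that decides whether it can work in the transaction it was handed. The always-available `trans` argument follows from the release notes; the `get_trans_mode()` spelling, its `"read_write"` return value, and the `result` output leaf are assumptions for illustration, not taken from the API reference.

```python
import ncs
from ncs.dp import Action


class ReconfigureAction(Action):
    @Action.action
    def cb_action(self, uinfo, name, kp, input, output, trans):
        # As of this release, the callback always receives an open
        # transaction in `trans` (read-only if the caller had none).
        mode = trans.get_trans_mode()  # assumed Python spelling of the new call
        if mode == "read_write":       # assumed return value, for illustration
            # Reuse the caller's transaction directly.
            root = ncs.maagic.get_root(trans)
            # ... modify configuration under `root` here ...
        else:
            # Only a read-only transaction is available, so start our own.
            with ncs.maapi.single_write_trans(uinfo.username, "system") as t:
                root = ncs.maagic.get_root(t)
                # ... modify configuration under `root` here ...
                t.apply()
        output.result = "ok"  # assumes a `result` leaf in the action output
```

Checking the mode first avoids opening a second transaction when the caller already supplied a writable one.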