In Lab 1: Create NSO Docker Environments, participants take the initial steps of setting up Network Services Orchestrator (NSO) environments using Docker containers. The lab covers tasks such as configuring Docker containers, defining the NSO image, and establishing the necessary network connectivity, and also introduces Docker Compose for orchestrating multi-container NSO deployments. The goal is hands-on experience in creating portable, scalable NSO environments within Docker so that NSO instances can be deployed efficiently across a variety of environments. Completing Cisco NSO300 Lab 1 equips participants with foundational skills in containerized NSO deployment, a crucial aspect of modern network automation and orchestration practice.
In this task, you will set up an nso-docker project, then create and build development and testing NSO Docker images that will later be used as a base for your NSO deployment.
Note: The goal of this activity is to introduce Docker and to produce a working NSO Docker environment. All the subsequent labs are based on an environment produced by this process.
Step 1: Connect to the VM server by clicking the NSO icon.
Step 2: Open the terminal window using the Terminal icon on the taskbar.
Step 3: Make sure that Docker is installed on the system, using the docker -v command. The command should display your Docker version.
rst@rst:~$ **docker -v**
Docker version 20.10.7, build f0df350
rst@rst:~$
Step 4: List the contents of your home directory with the ls command. Make sure that both the NSO installer and the nso-docker project are present.
Note: The nso-docker project is already included in this lab, but can otherwise be found at https://github.com/NSO-developer/nso-docker. It is a repository that is run by Cisco and contains the files that are required to set up an NSO project in Docker. It also contains several project skeletons that can be used for different types of NSO projects: system, NED, and package.
rst@rst:~$ **ls**
Desktop Documents Downloads Music NSO NSO\_Files Pictures Public snap Templates Videos
rst@rst:~$
Step 5: Unpack the NSO installer. This installer will be used to install NSO on Docker images.
rst@rst:~$ **cd NSO\_Files**
rst@rst:~/NSO\_Files$ **chmod +x ./nso-5.3.linux.x86\_64.signed.bin**
rst@rst:~/NSO\_Files$ **./nso-5.3.linux.x86\_64.signed.bin**
Unpacking...
Verifying signature...
Downloading CA certificate from http://www.cisco.com/security/pki/certs/crcam2.cer ...
Successfully downloaded and verified crcam2.cer.
Downloading SubCA certificate from http://www.cisco.com/security/pki/certs/innerspace.cer ...
Successfully downloaded and verified innerspace.cer.
Successfully verified root, subca and end-entity certificate chain.
Successfully fetched a public key from tailf.cer.
Successfully verified the signature of nso-5.3.linux.x86\_64.installer.bin using tailf.cer
rst@rst:~/NSO\_Files$
Step 6: Copy the NSO installer into your nso-docker repository.
rst@rst:~/NSO\_Files$ **cp nso-5.3.linux.x86\_64.installer.bin ../NSO/nso-docker/nso-install-files/**
**Step 7:** Set the environment variables. Set the DOCKER_REGISTRY, IMAGE_PATH, and NSO_IMAGE_PATH to “nso300.gitlab.local/” and the NSO_VERSION to “5.3”. These variables are used to build and tag the NSO Docker images. Usually, they are set within project-specific Makefiles, because multiple versions and locations of images can be used on a single machine.
**Note:** This lab activity does not include an actual remote Docker image registry. Instead, you need to make sure that all the required images are built locally and correctly tagged, so that the build process simulates an actual development or production environment.
rst@rst:~$ **export DOCKER\_REGISTRY=nso300.gitlab.local/**
rst@rst:~$ **export IMAGE\_PATH=nso300.gitlab.local/**
rst@rst:~$ **export NSO\_IMAGE\_PATH=nso300.gitlab.local/**
rst@rst:~$ **export NSO\_VERSION=5.3**
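These exports apply only to the current shell session; a new terminal starts without them. A minimal sketch (assuming bash) that sets the variables and warns about any that were missed:

```shell
# Set the registry-related variables used by the nso-docker Makefiles.
export DOCKER_REGISTRY=nso300.gitlab.local/
export IMAGE_PATH=nso300.gitlab.local/
export NSO_IMAGE_PATH=nso300.gitlab.local/
export NSO_VERSION=5.3

# Sanity check: warn about any variable that is still unset or empty.
for var in DOCKER_REGISTRY IMAGE_PATH NSO_IMAGE_PATH NSO_VERSION; do
    [ -n "$(printenv "$var")" ] || echo "WARNING: $var is not set"
done
```

To persist the variables across sessions, the same export lines could be appended to ~/.bashrc.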
Step 8: Enter the nso-docker directory and build the two NSO images using the make command. The make command runs the targets defined in the Makefile, which build the base NSO images. This takes some time, but the images should build successfully.
rst@rst:~/NSO\_Files$ **cd ../NSO/nso-docker/**
rst@rst:~/NSO/nso-docker$ **make**
The default make target will build Docker images out of all the NSO
versions found in nso-install-files/. To also run the test
suite for the built images, run 'make test-all'
make build-all
make\[1\]: Entering directory '/home/rst/NSO/nso-docker'
### OUTPUT OMITTED ###
Successfully built c2e80d8aa8e6
Successfully tagged nso300.gitlab.local/cisco-nso-base:5.3-rst
### OUTPUT OMITTED ###
Successfully built 037b51416caa
Successfully tagged nso300.gitlab.local/cisco-nso-dev:5.3-rst
rm -f \*.bin
make\[3\]: Leaving directory '/home/rst/NSO/nso-docker/docker-images'
make\[2\]: Leaving directory '/home/rst/NSO/nso-docker'
make\[1\]: Leaving directory '/home/rst/NSO/nso-docker'
Step 9: Verify that your local Docker registry now contains a base and a dev image by using the docker images command. These images can be used as a base for NSO development and/or production.
rst@rst:~/NSO/nso-docker$ **docker images**
REPOSITORY TAG IMAGE ID CREATED SIZE
nso300.gitlab.local/cisco-nso-base 5.3 c2e80d8aa8e6 2 minutes ago 604MB
nso300.gitlab.local/cisco-nso-base 5.3-rst c2e80d8aa8e6 2 minutes ago 604MB
nso300.gitlab.local/cisco-nso-dev 5.3 037b51416caa 2 minutes ago 1.38GB
nso300.gitlab.local/cisco-nso-dev 5.3-rst 037b51416caa 2 minutes ago 1.38GB
<none> <none> a805fd8f7577 4 minutes ago 732MB
debian buster 0980b84bde89 6 days ago 114MB
rst@rst:~/NSO/nso-docker$
Step 10: Re-tag the images using the make tag-release command. This action prepares the images for general use, not just for the current user.
rst@rst:~/NSO/nso-docker$ **make tag-release**
docker tag nso300.gitlab.local/cisco-nso-dev:5.3-rst nso300.gitlab.local/cisco-nso-dev:5.3
docker tag nso300.gitlab.local/cisco-nso-base:5.3-rst nso300.gitlab.local/cisco-nso-base:5.3
Step 11: Enter the nso300 folder in the home directory. This directory is used as the home directory for your NSO Docker project. Your output should resemble this example:
rst@rst:~/NSO/nso-docker$ **cd ../nso300/**
rst@rst:~/NSO/nso300$
Step 12: Copy the contents of the nso-system project skeleton from the nso-docker project into your nso300 directory.
rst@rst:~/NSO/nso300$ **cp -r ../nso-docker/skeletons/system/\* .**
rst@rst:~/NSO/nso300$ **ls**
Dockerfile.in extra-files includes Makefile nid nidcommon.mk nidsystem.mk packages README.nid-system.org test-packages
rst@rst:~/NSO/nso300$
Step 13: Open the Makefile. The Makefile from this project skeleton is used to set up, start, and test your project-specific NSO Docker environment.
For now, it contains mostly boilerplate comments; most of the complexity is hidden in the pre-built nidsystem.mk library.
rst@rst:~/NSO/nso300$ **code Makefile**
rst@rst:~/NSO/nso300$
The following output shows how the file should appear when you open it for the first time:
# You can set the default NSO\_IMAGE\_PATH & PKG\_PATH to point to your docker
# registry so that developers don't have to manually set these variables.
# Similarly for NSO\_VERSION you can set a default version. Note how the ?=
# operator only sets these variables if not already set, thus you can easily
# override them by explicitly setting them in your environment and they will be
# overridden by variables in CI.
# TODO: uncomment and fill in values for your environment
# Default variables:
#export NSO\_IMAGE\_PATH ?= registry.example.com:5000/my-group/nso-docker/
#export PKG\_PATH ?= registry.example.com:5000/my-group/
#export NSO\_VERSION ?= 5.4
# Include standard NID (NSO in Docker) system Makefile that defines all standard
# make targets
include nidsystem.mk
# The rest of this file is specific to this repository.
# For development purposes it is useful to be able to start a testenv once and
# then run the tests, defined in testenv-test, multiple times, adjusting the
# code in between each run. That is what a normal development cycle looks like.
# There is usually some form of initial configuration that we want to apply
# once, after the containers have started up, but avoid applying it for each
# invocation of testenv-test. Such configuration can be placed at the end of
# testenv-start-extra. You can also start extra containers with
# testenv-start-extra, for example netsims or virtual routers.
# TODO: you should modify the make targets below for your package
# TODO: clean up your Makefile by removing comments explaining how to do things
# Start extra containers or place things you want to run once, after startup of
# the containers, in testenv-start-extra.
testenv-start-extra:
@echo "\\n== Starting repository specific testenv"
# Start extra things, for example a netsim container by doing:
# docker run -td --name $(CNT\_PREFIX)-my-netsim --network-alias mynetsim1 $(DOCKER\_ARGS) $(IMAGE\_PATH)my-ned-repo/netsim:$(DOCKER\_TAG)
# Use --network-alias to give it a name that will be resolvable from NSO and
# other containers in our testenv network, i.e. in NSO, the above netsim should
# be configured with the address 'mynetsim1'.
# Make sure to include $(DOCKER\_ARGS) as it sets the right docker network and
# label which other targets, such as testenv-stop, operates on. If you start an
# extra NSO container, use $(DOCKER\_NSO\_ARGS) and give a unique name but
# starting with '-nso', like so:
# docker run -td --name $(CNT\_PREFIX)-nsofoo --network-alias nsofoo $(DOCKER\_NSO\_ARGS) $(IMAGE\_PATH)$(PROJECT\_NAME)/nso:$(DOCKER\_TAG)
#
# Add things to be run after startup is complete. If you want to configure NSO,
# be sure to wait for it to start, using e.g.:
#docker exec -t $(CNT\_PREFIX)-nso bash -lc 'ncs --wait-started 600'
#
# For example, to load an XML configuration file:
# docker cp test/initial-config.xml $(CNT\_PREFIX)-nso:/tmp/initial-config.xml
# $(MAKE) testenv-runcmdJ CMD="configure\\n load merge /tmp/initial-config.xml\\n commit"
# Place your tests in testenv-test. Feel free to define a target per test case
# and call them from testenv-test in case you have more than a handful of cases.
# Sometimes when there is a "setup" or "preparation" part of a test, it can be
# useful to separate into its own target as to make it possible to run that
# prepare phase and then manually inspect the state of the system. You can
# achieve this by further refining the make targets you have.
testenv-test:
@echo "\\n== Running tests"
@echo "-- Verify packages are operationally up"
$(MAKE) testenv-runcmdJ CMD="show packages" | docker run -i --rm $(NSO\_IMAGE\_PATH)cisco-nso-dev:$(NSO\_VERSION) bash -c '! grep -P "oper-status (?!up)" >/dev/null' || (echo "ERROR: packages not operationally up:" && $(MAKE) testenv-runcmdJ CMD="show packages" && false)
@echo "TODO: Fill in your tests here"
# Some examples for how to run commands in the ncs\_cli:
# $(MAKE) testenv-runcmdJ CMD="show packages"
# $(MAKE) testenv-runcmdJ CMD="request packages reload"
# Multiple commands in a single session also works - great for configuring stuff:
# $(MAKE) testenv-runcmdJ CMD="configure\\n set foo bar\\n commit"
# We can test for certain output by combining show commands in the CLI with for
# example grep:
# $(MAKE) testenv-runcmdJ CMD="show configuration foo" | grep bar
Step 14: Locate the environment variables section and replace it with the variables that are required for this project. Set the NSO_VERSION to the NSO version that you are using (5.3) and set the NSO_IMAGE_PATH and IMAGE_PATH to that of your local Docker image registry. To make the file easier to work with, remove all the comments.
export NSO\_VERSION=5.3
export NSO\_IMAGE\_PATH=nso300.gitlab.local/
export IMAGE\_PATH=nso300.gitlab.local/
include nidsystem.mk
testenv-start-extra:
@echo "\\n== Starting repository specific testenv"
testenv-test:
@echo "\\n== Running tests"
@echo "TODO: Fill in your tests here"
Step 15: Save the file and exit the file editor.
Step 16: Build the NSO project image using the make build command.
rst@rst:~/NSO/nso300$ **make build**
Checking NSO in Docker images are available...
INFO: nso300.gitlab.local/cisco-nso-base:5.3 exists, attempting pull of latest version
Error response from daemon: Get https://nso300.gitlab.local/v2/: dial tcp: lookup nso300.gitlab.local: no such host
Error response from daemon: Get https://nso300.gitlab.local/v2/: dial tcp: lookup nso300.gitlab.local: no such host
-- Generating Dockerfile
cp Dockerfile.in Dockerfile
for DEP\_NAME in $(ls includes/); do export DEP\_URL=$(awk '{ print "echo", $0 }' includes/${DEP\_NAME} | /bin/sh -); awk "/DEP\_END/ { print \\"FROM ${DEP\_URL} AS ${DEP\_NAME}\\" }; /DEP\_INC\_END/ { print \\"COPY --from=${DEP\_NAME} /var/opt/ncs/packages/ /includes/\\" }; 1" Dockerfile > Dockerfile.tmp; mv Dockerfile.tmp Dockerfile; done
docker build --target build -t nso300.gitlab.local/nso300/build:5.3-rst --build-arg NSO\_IMAGE\_PATH=nso300.gitlab.local/ --build-arg NSO\_VERSION=5.3 --build-arg PKG\_FILE=nso300.gitlab.local/nso300/package:5.3-rst .
Sending build context to Docker daemon 69.63kB
Step 1/8 : ARG NSO\_IMAGE\_PATH
Step 2/8 : ARG NSO\_VERSION
Step 3/8 : FROM ${NSO\_IMAGE\_PATH}cisco-nso-dev:${NSO\_VERSION} AS build
### OUTPUT OMITTED ###
Successfully built d21c0c887624
Successfully tagged nso300.gitlab.local/nso300/nso:5.3-rst
rst@rst:~/NSO/nso300$
Step 17: Start the NSO Docker environment using the make testenv-start command. This command starts the NSO System container. This container includes the NSO installation and is based on the base (production) NSO image.
rst@rst:~/NSO/nso300$ **make testenv-start**
docker network inspect testenv-nso300-5.3-rst >/dev/null 2>&1 || docker network create testenv-nso300-5.3-rst
d1c59fadd2f9c3aa35155d37f51c5c55ec362e70e22f77eaf17fec6c245996f4
docker run -td --name testenv-nso300-5.3-rst-nso --network-alias nso --network testenv-nso300-5.3-rst --label com.cisco.nso.testenv.name=testenv-nso300-5.3-rst --label com.cisco.nso.testenv.type=nso --volume /var/opt/ncs/packages -e DEBUGPY= --expose 5678 --publish-all -e ADMIN\_PASSWORD=NsoDocker1337 ${NSO\_EXTRA\_ARGS} nso300.gitlab.local/nso300/nso:5.3-rst
856dd5e1f8887ffd9f6d38553932b2f8a6d15dedb0803a1db526d70d370b65e6
make testenv-start-extra
make\[1\]: Entering directory '/home/rst/NSO/nso300'
== Starting repository specific testenv
make\[1\]: Leaving directory '/home/rst/NSO/nso300'
make testenv-wait-started-nso
make\[1\]: Entering directory '/home/rst/NSO/nso300'
NSO instance testenv-nso300-5.3-rst-nso has started
All NSO instance have started
make\[1\]: Leaving directory '/home/rst/NSO/nso300'
rst@rst:~/NSO/nso300$
Step 18: Verify that the NSO System container is running, using the docker ps -a command.
rst@rst:~/NSO/nso300$ **docker ps -a**
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
856dd5e1f888 nso300.gitlab.local/nso300/nso:5.3-rst "/run-nso.sh" 29 seconds ago Up 28 seconds (healthy) 0.0.0.0:49158->22/tcp, :::49158->22/tcp, 0.0.0.0:49157->80/tcp, :::49157->80/tcp, 0.0.0.0:49156->443/tcp, :::49156->443/tcp, 0.0.0.0:49155->830/tcp, :::49155->830/tcp, 0.0.0.0:49154->4334/tcp, :::49154->4334/tcp, 0.0.0.0:49153->5678/tcp, :::49153->5678/tcp testenv-nso300-5.3-rst-nso
rst@rst:~/NSO/nso300$
Step 19: Enter the NSO CLI using the make testenv-cli command. This command takes you into the test NSO System container and executes the ncs_cli command, which takes you directly to the NSO CLI. Note: To enter the NSO System container Linux shell instead of the NSO CLI, use the make testenv-shell command.
rst@rst:~/NSO/nso300$ **make testenv-cli**
docker exec -it testenv-nso300-5.3-rst-nso bash -lc 'ncs\_cli -u admin'
admin connected from 127.0.0.1 using console on 856dd5e1f888
admin@ncs>
Step 20: Exit the NSO CLI. This action also exits the Docker container.
admin@ncs> **exit**
rst@rst:~/NSO/nso300$
In this task, you will create NSO system image dependencies such as NEDs and set up netsim devices for your NSO environment inside Docker containers. Using dependencies allows you to quickly and consistently set up and edit your NSO environment.
Step 1: Enter the NSO System container again, switch the CLI mode, and list the devices and packages using show commands. No devices are currently added to NSO, and no packages are loaded.
rst@rst:~/NSO/nso300$ **make testenv-cli**
docker exec -it testenv-nso300-5.3-rst-nso bash -lc 'ncs\_cli -u admin'
admin connected from 127.0.0.1 using console on 856dd5e1f888
admin@ncs> switch cli
admin@ncs# show devices brief
NAME ADDRESS DESCRIPTION NED ID
----------------------------------
admin@ncs# show packages
% No entries found.
admin@ncs#
Step 2: Exit the NSO Docker container.
admin@ncs# exit
rst@rst:~/NSO/nso300$
Step 3: Create a neds folder in the NSO directory and navigate there.
rst@rst:~/NSO$ **mkdir neds**
rst@rst:~/NSO$ **cd neds/**
rst@rst:~/NSO/neds$
Step 4: Create four directories there, one for each type of NED you will use in this lab: ned-ios, ned-iosxr, ned-nx, and ned-asa.
rst@rst:~/NSO/neds$ **mkdir ned-ios**
rst@rst:~/NSO/neds$ **mkdir ned-iosxr**
rst@rst:~/NSO/neds$ **mkdir ned-nx**
rst@rst:~/NSO/neds$ **mkdir ned-asa**
rst@rst:~/NSO/neds$
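The four mkdir calls above can also be collapsed into a single command with bash brace expansion; a minimal sketch, assuming bash (a scratch directory stands in for ~/NSO/neds here):

```shell
# Scratch directory for illustration; in the lab this would be ~/NSO/neds.
cd "$(mktemp -d)"

# Brace expansion expands to ned-ios ned-iosxr ned-nx ned-asa.
mkdir -p ned-{ios,iosxr,nx,asa}

ls -d ned-*   # ned-asa ned-ios ned-iosxr ned-nx
```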
Step 5: Copy the NED project skeleton contents from the nso-docker repository to each one of the directories.
Note: Instead of creating a netsim lab locally with the command ncs-netsim, you build the images using a NED project skeleton, include them as dependencies to NSO Docker images and start them within your test environment.
rst@rst:~/NSO/neds$ **cp -r ../nso-docker/skeletons/ned/\* ned-ios/**
rst@rst:~/NSO/neds$ **cp -r ../nso-docker/skeletons/ned/\* ned-iosxr/**
rst@rst:~/NSO/neds$ **cp -r ../nso-docker/skeletons/ned/\* ned-nx/**
rst@rst:~/NSO/neds$ **cp -r ../nso-docker/skeletons/ned/\* ned-asa/**
rst@rst:~/NSO/neds$
Step 6: Copy the NEDs from ~/NSO_Files/neds into the packages subdirectory of their respective NED folder.
rst@rst:~/NSO/neds$ **cp -r ~/NSO\_Files/neds/cisco-ios-cli-6.42 ned-ios/packages/**
rst@rst:~/NSO/neds$ **cp -r ~/NSO\_Files/neds/cisco-iosxr-cli-7.18 ned-iosxr/packages/**
rst@rst:~/NSO/neds$ **cp -r ~/NSO\_Files/neds/cisco-nx-cli-5.13 ned-nx/packages/**
rst@rst:~/NSO/neds$ **cp -r ~/NSO\_Files/neds/cisco-asa-cli-6.7 ned-asa/packages/**
rst@rst:~/NSO/neds$
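Steps 4 through 6 follow the same pattern per NED, so they can be compressed into one loop. A sketch under stated assumptions: the scratch setup below stands in for the real skeleton and NED sources, and the name pairs mirror the lab's NED versions.

```shell
# Scratch setup for illustration only; in the lab, run the loop from
# ~/NSO/neds with the real skeleton and NED packages in place.
cd "$(mktemp -d)"
mkdir -p skeleton ned-src/cisco-ios-cli-6.42 ned-src/cisco-iosxr-cli-7.18 \
         ned-src/cisco-nx-cli-5.13 ned-src/cisco-asa-cli-6.7
touch skeleton/Makefile

# One loop per NED: create the directory, copy the skeleton, copy the NED.
for pair in ios:cisco-ios-cli-6.42 iosxr:cisco-iosxr-cli-7.18 \
            nx:cisco-nx-cli-5.13 asa:cisco-asa-cli-6.7; do
    ned=${pair%%:*}; pkg=${pair#*:}
    mkdir -p "ned-$ned/packages"
    cp -r skeleton/. "ned-$ned/"
    cp -r "ned-src/$pkg" "ned-$ned/packages/"
done

ls ned-ios/packages   # cisco-ios-cli-6.42
```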
Step 7: Enter the ios NED directory and build the image using the make build command. Tag the image for release with the make tag-release command. The make build command should produce three images: netsim, testnso, and package. The netsim image is used to host a netsim device, the testnso image is used in automated testing of the NED, and the package image contains the compiled NED package.
rst@rst:~/NSO/neds$ **cd ned-ios**
rst@rst:~/NSO/neds/ned-ios$ **make build**
Checking NSO in Docker images are available...
INFO: nso300.gitlab.local/cisco-nso-base:5.3 exists, attempting pull of latest version
### OUTPUT OMITTED ###
Successfully built bf2b36c7055f
Successfully tagged nso300.gitlab.local/ned-ios/netsim:5.3-rst
### OUTPUT OMITTED ###
Successfully built c8f50e20c4e0
Successfully tagged nso300.gitlab.local/ned-ios/testnso:5.3-rst
### OUTPUT OMITTED ###
Successfully built 6527679f8a7a
Successfully tagged nso300.gitlab.local/ned-ios/package:5.3-rst
rst@rst:~/NSO/neds/ned-ios$ **make tag-release**
docker tag nso300.gitlab.local/ned-ios/package:5.3-rst nso300.gitlab.local/ned-ios/package:5.3
docker tag nso300.gitlab.local/ned-ios/netsim:5.3-rst nso300.gitlab.local/ned-ios/netsim:5.3
rst@rst:~/NSO/neds/ned-ios$
Step 8: Repeat the previous step for iosxr, nx, and asa directories. Build and tag the images using the make build and make tag-release commands.
rst@rst:~/NSO/neds/ned-ios$ **cd ../ned-iosxr**
rst@rst:~/NSO/neds/ned-iosxr$ **make build**
### OUTPUT OMITTED ###
rst@rst:~/NSO/neds/ned-iosxr$ **make tag-release**
**\### Do the same for the ned-nx and ned-asa NEDs ###**
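The build-and-tag sequence from Step 7 can be wrapped in a small helper so the remaining NEDs are one loop away; build_ned is a hypothetical name, and the commented loop assumes you are in ~/NSO/neds:

```shell
# Build and release-tag the NED images in one NED directory.
# The subshell keeps the cd from affecting the caller's working directory.
build_ned() {
    ( cd "$1" && make build && make tag-release )
}

# In the lab, the three remaining NEDs would then be handled with:
#   for ned in ned-iosxr ned-nx ned-asa; do build_ned "$ned"; done
```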
Step 9: Clean up the intermediate Docker images with the docker system prune -f command. This command removes unused data from your Docker environment: stopped containers, unused networks, dangling images, and build cache.
rst@rst:~/NSO/neds/ned-asa$ **docker system prune -f**
Deleted Images:
deleted: sha256:e16c352f870b757b4eb13df058a8d9486ecec7abe4883c32aeeb1969de8a5223
deleted: sha256:d0b4dc6a92e28af1fadf8eff1ad389dd922182c321e2d4b5c4ec7d8322b2cb91
deleted: sha256:f04c1c0ffdc76bac28e7960c087447a4984e82d10bcc6d5c8ca94adac2b684f4
deleted: sha256:c147ada0f6cb89811f152d4af3a574ce91d716cc96436de1606f351c6e5d961f
deleted: sha256:ad81950cff7ecd77ff010ebf719801a43538a6dd60c4aee24ff029a6fa043a1f
deleted: sha256:615d64fe1a6d7fef7cf9533daceee42461899ee8085b8208345f20927eacb369
deleted: sha256:bb77f882c0a40a3387d5610b92b7e3f198786325a611127b5ade24ac8325c777
deleted: sha256:1a869803c9196049755269225644b68068aeb1caac7c2e4a1ed27dc12927e0ed
deleted: sha256:359fae3423a0d223c7a01cd52d3a7736ce1f9132f3bec087014c5f98234f9a94
deleted: sha256:8c04f81de68a862297a477e84487a93c4e90d8604e7f12f649810561fec8a288
deleted: sha256:a805fd8f757711ec9f7e8a148f8239a51aa467f95406f3ade2dbdf683a053141
deleted: sha256:9f2b93d2dbf017c25fa6d8e0f91f7adc113d422507e34742474d96793eba33fe
deleted: sha256:5cdf522f7d145f50aac68d99757b3481b3af9cfcf59acde12a0ee0e8c5fc78cb
deleted: sha256:9a67dfbbac9f19c7d5dd932a44141a61e1cc81ade75dd971fb80a697ff33b237
deleted: sha256:0cad90680bd1fccee6111663db9b0ce24d5181867c436750bac21f86ca5b3422
deleted: sha256:6d8c67623d59271fc4acd31ced2bd2eca08ffa83410a1d8e6c64a67f35fd76f7
deleted: sha256:2c586ffe48661599baf4e8b6d632b0f30a0f7ad20fa42e14cd48713c8fd77f68
deleted: sha256:43ddb97cc436e229b531df3866bd214179b0da62f882851e59d0f2a6a24bd8ab
deleted: sha256:1ddc5a2aef37174c67738cb824c387f4a385eaca425de8cfd446e69cb7db256e
deleted: sha256:cdf754dbceab349cb77ec74546e361b0e28da7235b53fcf229c7532c1a89710a
deleted: sha256:4268465e68a07c82448b0b2045672376c4fe99ebcf4938d628df7333aa5e63ee
deleted: sha256:cd3e67dfb0a8f7a8ceb6ca28441c80c6fc2327df8397a9e849fb4ca688c30062
deleted: sha256:88c0b78dfd381f37f040a498af5d79db181e48de73489149ee7704d2bbc24305
deleted: sha256:7315e47e789e38d8a65073dac2d1fd364dc67073c536012c3768aa9896142dc0
deleted: sha256:358ecc54961e3d47533a1b09b80d58ad6b6fb618774d142a275fcdb7fe931b78
Deleted build cache objects:
z42hecvhpv6u1y5z0mhfbop8p
7g0rhpcoegs5cg4s65oplcvtm
9001ubq8ahjqvbq9p94ldgs1i
paezg1u1tv2iczkhvw8wgn3v5
6wt4jt86tl0kfupgpv0v4icyp
9smkbk3qgk9dihiaix96yenm4
izm6u1kl10b1tsz5ava5x5fvj
dibdjfwkqxn1qywko057kpdy5
sfirjykkgdh2o03m6yd5sfgdb
oqmsde0wtcmz0c15li05kooqv
qpby0esflafbf2qbss4tbgot6
tk76jpgpve0u7cv8xo76vledx
i4gt0p4rve2vlytidfum3jdhe
c2b525y66cnjvr74zqum8c2ra
ra8gdueph7vs652hzg0wza4jc
Total reclaimed space: 2.045GB
rst@rst:~/NSO/neds/ned-asa$
Step 10: Verify that the netsim and package images with tag 5.3 exist for every NED by listing the Docker images with the docker images command. These are the eight Docker images that should be present for the setup of the dependencies:
rst@rst:~/NSO/neds/ned-asa$ **docker images**
REPOSITORY TAG IMAGE ID CREATED SIZE
nso300.gitlab.local/ned-asa/package 5.3 8bc3ef178e20 About a minute ago 16.8MB
nso300.gitlab.local/ned-asa/package 5.3-rst 8bc3ef178e20 About a minute ago 16.8MB
nso300.gitlab.local/ned-asa/testnso 5.3-rst c65ce8f12362 About a minute ago 621MB
nso300.gitlab.local/ned-asa/netsim 5.3 91f0c0944954 About a minute ago 1.4GB
nso300.gitlab.local/ned-asa/netsim 5.3-rst 91f0c0944954 About a minute ago 1.4GB
nso300.gitlab.local/ned-asa/build 5.3-rst bf9b29981c6c About a minute ago 1.4GB
nso300.gitlab.local/ned-nx/package 5.3 f2a0e7576959 3 minutes ago 18.2MB
nso300.gitlab.local/ned-nx/package 5.3-rst f2a0e7576959 3 minutes ago 18.2MB
nso300.gitlab.local/ned-nx/testnso 5.3-rst 421e9f221058 3 minutes ago 622MB
nso300.gitlab.local/ned-nx/netsim 5.3 739831ce558f 3 minutes ago 1.4GB
nso300.gitlab.local/ned-nx/netsim 5.3-rst 739831ce558f 3 minutes ago 1.4GB
nso300.gitlab.local/ned-nx/build 5.3-rst 7a5ccc31ad55 3 minutes ago 1.4GB
nso300.gitlab.local/ned-iosxr/package 5.3 dcb4df02d494 4 minutes ago 110MB
nso300.gitlab.local/ned-iosxr/package 5.3-rst dcb4df02d494 4 minutes ago 110MB
nso300.gitlab.local/ned-iosxr/testnso 5.3-rst cd54e2163636 4 minutes ago 714MB
nso300.gitlab.local/ned-iosxr/netsim 5.3 858e512667b8 4 minutes ago 1.49GB
nso300.gitlab.local/ned-iosxr/netsim 5.3-rst 858e512667b8 4 minutes ago 1.49GB
nso300.gitlab.local/ned-iosxr/build 5.3-rst a251f95c4eb4 4 minutes ago 1.53GB
nso300.gitlab.local/ned-ios/package 5.3 6527679f8a7a 7 minutes ago 152MB
nso300.gitlab.local/ned-ios/package 5.3-rst 6527679f8a7a 7 minutes ago 152MB
nso300.gitlab.local/ned-ios/testnso 5.3-rst c8f50e20c4e0 7 minutes ago 756MB
nso300.gitlab.local/ned-ios/netsim 5.3 bf2b36c7055f 7 minutes ago 1.53GB
nso300.gitlab.local/ned-ios/netsim 5.3-rst bf2b36c7055f 7 minutes ago 1.53GB
nso300.gitlab.local/ned-ios/build 5.3-rst e7157b149176 7 minutes ago 1.57GB
nso300.gitlab.local/nso300/nso 5.3-rst d21c0c887624 14 minutes ago 1.02GB
nso300.gitlab.local/nso300/build 5.3-rst 29e8491ed4f5 15 minutes ago 1.38GB
nso300.gitlab.local/cisco-nso-base 5.3 c2e80d8aa8e6 20 minutes ago 604MB
nso300.gitlab.local/cisco-nso-base 5.3-rst c2e80d8aa8e6 20 minutes ago 604MB
nso300.gitlab.local/cisco-nso-dev 5.3 037b51416caa 20 minutes ago 1.38GB
nso300.gitlab.local/cisco-nso-dev 5.3-rst 037b51416caa 20 minutes ago 1.38GB
debian buster 0980b84bde89 6 days ago 114MB
rst@rst:~/NSO/neds/ned-asa$
Step 11: Include the images as dependencies of the nso300 project. Enter the nso300/includes directory and create an image dependency file for each NED type. Each file should contain the image name and tag for the corresponding NED. By including these images with the main NSO system image, the packages are added to the NSO running folder when the image is built or rebuilt.
rst@rst:~/NSO/nso300$ ls
Dockerfile Dockerfile.in extra-files includes Makefile nid nidcommon.mk nidsystem.mk packages README.nid-system.org test-packages
rst@rst:~/NSO/nso300$ cd includes/
rst@rst:~/NSO/nso300/includes$ ls
rst@rst:~/NSO/nso300/includes$ **echo "${NSO\_IMAGE\_PATH}ned-ios/package:${NSO\_VERSION}" >> ned-ios**
rst@rst:~/NSO/nso300/includes$ **echo "${NSO\_IMAGE\_PATH}ned-iosxr/package:${NSO\_VERSION}" >> ned-iosxr**
rst@rst:~/NSO/nso300/includes$ **echo "${NSO\_IMAGE\_PATH}ned-nx/package:${NSO\_VERSION}" >> ned-nx**
rst@rst:~/NSO/nso300/includes$ **echo "${NSO\_IMAGE\_PATH}ned-asa/package:${NSO\_VERSION}" >> ned-asa**
Step 12: List the contents of the includes directory. You should now have four files there.
rst@rst:~/NSO/nso300/includes$ **ls**
ned-asa ned-ios ned-iosxr ned-nx
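The four echo commands in Step 11 follow one pattern, so they can be generated and verified in a loop. A sketch using a scratch directory (in the lab you would run only the loops, from ~/NSO/nso300/includes, with the variables already in your environment):

```shell
# Scratch directory stands in for ~/NSO/nso300/includes in this illustration.
cd "$(mktemp -d)"
export NSO_IMAGE_PATH=nso300.gitlab.local/ NSO_VERSION=5.3

# Generate one include file per NED, each naming its package image and tag.
for ned in ios iosxr nx asa; do
    echo "${NSO_IMAGE_PATH}ned-$ned/package:${NSO_VERSION}" > "ned-$ned"
done

# Verify: every file references its own NED's package image at the right tag.
for f in ned-*; do
    grep -q "^${NSO_IMAGE_PATH}$f/package:${NSO_VERSION}\$" "$f" \
        || echo "ERROR: $f has unexpected contents"
done
```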
Step 13: Start the NED netsim images as a part of the test environment. You can do this by editing the Makefile of the nso300 project.
rst@rst:~/NSO/nso300/includes$ **cd ..**
rst@rst:~/NSO/nso300$ **code Makefile**
Step 14: Add an IOS device that is named CE11. You add the device by creating a Docker container from the device's NED image using the docker run command.
Note: Make sure that you only use tab characters when indenting commands inside Makefiles. If you use regular spaces, you will encounter errors. You can learn more about the docker run command at https://docs.docker.com/engine/reference/run.
docker run -td --name $(CNT_PREFIX)-CE11 --network-alias CE11 $(DOCKER_ARGS) $(IMAGE_PATH)ned-ios/netsim:$(DOCKER_TAG)
export NSO\_VERSION=5.3
export NSO\_IMAGE\_PATH=nso300.gitlab.local/
export IMAGE\_PATH=nso300.gitlab.local/
include nidsystem.mk
testenv-start-extra:
@echo "\\n== Starting repository specific testenv"
docker run -td --name $(CNT\_PREFIX)-CE11 --network-alias CE11 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
Step 15: Add the additional devices. Create a total of eight Cisco IOS devices (CE11, CE12, CE21, CE31, PE11, PE12, PE31, and SW31), one Cisco IOS XR netsim device (PE21), and one Cisco NX device (SW32). Include an extra Cisco ASA device, named ASA41. Because the NSO project skeleton provides a --network parameter within ${DOCKER_ARGS} to the docker run command, all the containers are in the same Docker network and can therefore communicate with each other.
export NSO\_VERSION=5.3
export NSO\_IMAGE\_PATH=nso300.gitlab.local/
export IMAGE\_PATH=nso300.gitlab.local/
include nidsystem.mk
testenv-start-extra:
@echo "\\n== Starting repository specific testenv"
docker run -td --name $(CNT\_PREFIX)-CE11 --network-alias CE11 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
**docker run -td --name $(CNT\_PREFIX)-CE12 --network-alias CE12 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)**
**docker run -td --name $(CNT\_PREFIX)-CE21 --network-alias CE21 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)**
**docker run -td --name $(CNT\_PREFIX)-CE31 --network-alias CE31 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)**
**docker run -td --name $(CNT\_PREFIX)-PE11 --network-alias PE11 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)**
**docker run -td --name $(CNT\_PREFIX)-PE12 --network-alias PE12 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)**
**docker run -td --name $(CNT\_PREFIX)-PE21 --network-alias PE21 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-iosxr/netsim:$(DOCKER\_TAG)**
**docker run -td --name $(CNT\_PREFIX)-PE31 --network-alias PE31 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)**
**docker run -td --name $(CNT\_PREFIX)-SW31 --network-alias SW31 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)**
**docker run -td --name $(CNT\_PREFIX)-SW32 --network-alias SW32 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-nx/netsim:$(DOCKER\_TAG)**
**docker run -td --name $(CNT\_PREFIX)-ASA41 --network-alias ASA41 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-asa/netsim:$(DOCKER\_TAG)**
testenv-test:
@echo "\\n== Running tests"
@echo "TODO: Fill in your tests here"
Step 16: Save the file and exit the file editor.
Step 17: Rebuild the test image to include the NED packages.
rst@rst:~/NSO/nso300$ make build
### OUTPUT OMITTED ###
Successfully built 1814d66a825e
Successfully tagged nso300.gitlab.local/nso300/nso:5.3-rst
rst@rst:~/NSO/nso300$
Step 18: Stop and start the Docker environment again.
Stopping and starting the NSO Docker environment deletes and recreates the Docker containers. This means that any CDB configuration is lost, because the NSO System container is deleted. Your packages remain, though, because they are not located in that container, but merely mounted.
Note: If you forget to stop the test environment before starting it again, a conflict in container or network names occurs.
rst@rst:~/NSO/nso300$ **make testenv-stop**
docker ps -aq --filter label=com.cisco.nso.testenv.name=testenv-nso300-5.3-rst | xargs --no-run-if-empty docker rm -vf
da013d1f603d
docker network rm testenv-nso300-5.3-rst
rst@rst:~/NSO/nso300$ **make testenv-start**
docker network inspect testenv-nso300-5.3-rst >/dev/null 2>&1 || docker network create testenv-nso300-5.3-rst
4e01d917621f1cc7a256623fe3c1e1e33503ca01a3521ef6db65849405396161
docker run -td --name testenv-nso300-5.3-rst-nso --network-alias nso --network testenv-nso300-5.3-rst --label com.cisco.nso.testenv.name=testenv-nso300-5.3-rst --label com.cisco.nso.testenv.type=nso --volume /var/opt/ncs/packages -e DEBUGPY= --expose 5678 --publish-all -e ADMIN\_PASSWORD=NsoDocker1337 ${NSO\_EXTRA\_ARGS} nso300.gitlab.local/nso300/nso:5.3-rst
a6d2585236876a9cee30dfcc5ffe246ddcbc07a4029d79370b466f80b0900355
make testenv-start-extra
make\[1\]: Entering directory '/home/rst/NSO/nso300'
== Starting repository specific testenv
docker run -td --name testenv-nso300-5.3-rst-CE11 --network-alias CE11 --network testenv-nso300-5.3-rst --label com.cisco.nso.testenv.name=testenv-nso300-5.3-rst nso300.gitlab.local/ned-ios/netsim:5.3-rst
e7bdc1dd0b9098960211bae749aec4726382bd29fadfa82ed03f03094b01b95f
docker run -td --name testenv-nso300-5.3-rst-CE12 --network-alias CE12 --network testenv-nso300-5.3-rst --label com.cisco.nso.testenv.name=testenv-nso300-5.3-rst nso300.gitlab.local/ned-ios/netsim:5.3-rst
4fe4b64fcd7d5fa8eb28cb31620721ddea8878099f20aad330c96e21e9e927b2
docker run -td --name testenv-nso300-5.3-rst-CE21 --network-alias CE21 --network testenv-nso300-5.3-rst --label com.cisco.nso.testenv.name=testenv-nso300-5.3-rst nso300.gitlab.local/ned-ios/netsim:5.3-rst
a7d073e92a16cf19f5b99746ca0059ca29c903baf13742dc9c3657ea3b1687d0
docker run -td --name testenv-nso300-5.3-rst-CE31 --network-alias CE31 --network testenv-nso300-5.3-rst --label com.cisco.nso.testenv.name=testenv-nso300-5.3-rst nso300.gitlab.local/ned-ios/netsim:5.3-rst
1e4a2f01c2eb4655750b4269e29ea2c888417472c88cc72a921fa0c2498238c7
docker run -td --name testenv-nso300-5.3-rst-PE11 --network-alias PE11 --network testenv-nso300-5.3-rst --label com.cisco.nso.testenv.name=testenv-nso300-5.3-rst nso300.gitlab.local/ned-ios/netsim:5.3-rst
7a90292c6b98b61a156dc0b205606ea7ab71a1867dce1aa3e96db1e9f75aca3a
docker run -td --name testenv-nso300-5.3-rst-PE12 --network-alias PE12 --network testenv-nso300-5.3-rst --label com.cisco.nso.testenv.name=testenv-nso300-5.3-rst nso300.gitlab.local/ned-ios/netsim:5.3-rst
7bb98b17490e86758bd6628675dc114ca203185a3c138ae25a919bc03cf30cd8
docker run -td --name testenv-nso300-5.3-rst-PE21 --network-alias PE21 --network testenv-nso300-5.3-rst --label com.cisco.nso.testenv.name=testenv-nso300-5.3-rst nso300.gitlab.local/ned-iosxr/netsim:5.3-rst
b519185a9bf39960d713539453be28554d9ff8889c78c7ebbb1dd374650622ee
docker run -td --name testenv-nso300-5.3-rst-PE31 --network-alias PE31 --network testenv-nso300-5.3-rst --label com.cisco.nso.testenv.name=testenv-nso300-5.3-rst nso300.gitlab.local/ned-ios/netsim:5.3-rst
1edf8fb5642f97d77e384d56bff9e5bdc2eea418ff526a8dfc5098831c6a1250
docker run -td --name testenv-nso300-5.3-rst-SW31 --network-alias SW31 --network testenv-nso300-5.3-rst --label com.cisco.nso.testenv.name=testenv-nso300-5.3-rst nso300.gitlab.local/ned-ios/netsim:5.3-rst
47cab993db166cfbb105810844187bb0699e865f5e7eca9f3b89ac6d21d525ff
docker run -td --name testenv-nso300-5.3-rst-SW32 --network-alias SW32 --network testenv-nso300-5.3-rst --label com.cisco.nso.testenv.name=testenv-nso300-5.3-rst nso300.gitlab.local/ned-nx/netsim:5.3-rst
87f6ee4b8a7d12e117622d8749442618a784cb4be99908a2dcf931758a5dd20d
docker run -td --name testenv-nso300-5.3-rst-ASA41 --network-alias ASA41 --network testenv-nso300-5.3-rst --label com.cisco.nso.testenv.name=testenv-nso300-5.3-rst nso300.gitlab.local/ned-asa/netsim:5.3-rst
c4c81790d37285925ad889ec38832fe8f9daeeda7105221ed65ce7b084574ba3
make[1]: Leaving directory '/home/rst/NSO/nso300'
make testenv-wait-started-nso
make[1]: Entering directory '/home/rst/NSO/nso300'
NSO instance testenv-nso300-5.3-rst-nso has started
All NSO instance have started
make[1]: Leaving directory '/home/rst/NSO/nso300'
rst@rst:~/NSO/nso300$
Step 19: Enter the NSO System container shell and ping the CE11 device, which is in the same internal Docker network. The netsim images in your Docker environment are now online; however, they are not yet added to NSO.
rst@rst:~/NSO/nso300$ **make testenv-shell**
docker exec -it testenv-nso300-5.3-rst-nso bash -l
root@a6d258523687:/# **ping CE11**
PING CE11 (172.20.0.3) 56(84) bytes of data.
64 bytes from testenv-nso300-5.3-rst-CE11.testenv-nso300-5.3-rst (172.20.0.3): icmp_seq=1 ttl=64 time=0.179 ms
64 bytes from testenv-nso300-5.3-rst-CE11.testenv-nso300-5.3-rst (172.20.0.3): icmp_seq=2 ttl=64 time=0.090 ms
64 bytes from testenv-nso300-5.3-rst-CE11.testenv-nso300-5.3-rst (172.20.0.3): icmp_seq=3 ttl=64 time=0.088 ms
64 bytes from testenv-nso300-5.3-rst-CE11.testenv-nso300-5.3-rst (172.20.0.3): icmp_seq=4 ttl=64 time=0.088 ms
64 bytes from testenv-nso300-5.3-rst-CE11.testenv-nso300-5.3-rst (172.20.0.3): icmp_seq=5 ttl=64 time=0.094 ms
64 bytes from testenv-nso300-5.3-rst-CE11.testenv-nso300-5.3-rst (172.20.0.3): icmp_seq=6 ttl=64 time=0.091 ms
64 bytes from testenv-nso300-5.3-rst-CE11.testenv-nso300-5.3-rst (172.20.0.3): icmp_seq=7 ttl=64 time=0.087 ms
64 bytes from testenv-nso300-5.3-rst-CE11.testenv-nso300-5.3-rst (172.20.0.3): icmp_seq=8 ttl=64 time=0.087 ms
64 bytes from testenv-nso300-5.3-rst-CE11.testenv-nso300-5.3-rst (172.20.0.3): icmp_seq=9 ttl=64 time=0.090 ms
64 bytes from testenv-nso300-5.3-rst-CE11.testenv-nso300-5.3-rst (172.20.0.3): icmp_seq=10 ttl=64 time=0.090 ms
64 bytes from testenv-nso300-5.3-rst-CE11.testenv-nso300-5.3-rst (172.20.0.3): icmp_seq=11 ttl=64 time=0.083 ms
64 bytes from testenv-nso300-5.3-rst-CE11.testenv-nso300-5.3-rst (172.20.0.3): icmp_seq=12 ttl=64 time=0.084 ms
--- CE11 ping statistics ---
12 packets transmitted, 12 received, 0% packet loss, time 267ms
rtt min/avg/max/mdev = 0.083/0.095/0.179/0.028 ms
root@a6d258523687:/#
Step 20: Exit the NSO System container shell, then open and display the ~/lab/devices.xml file.
The devices.xml file contains the CDB configuration for the netsim devices. It could be assembled by manually exporting the configuration of each netsim device, but in this case it is provided for you.
Note: You can also import the devices by hand through the NSO CLI.
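Because every netsim entry in the file follows the same pattern, such a file can also be generated by a small script. A minimal sketch; the gen_device helper is hypothetical and not part of the lab files:

```shell
# Hypothetical helper: emit one <device> entry for devices.xml, given a
# device name (also used as the address) and a NED ID.
gen_device() {
  name=$1; ned=$2
  printf '<device><name>%s</name><address>%s</address><authgroup>netsim</authgroup>' "$name" "$name"
  printf '<device-type><cli><ned-id xmlns:%s="http://tail-f.com/ns/ned-id/%s">%s:%s</ned-id></cli></device-type>' "$ned" "$ned" "$ned" "$ned"
  printf '<state><admin-state>unlocked</admin-state></state></device>\n'
}

# Example: generate the CE11 entry.
gen_device CE11 cisco-ios-cli-6.42
```

Calling the helper once per device and wrapping the output, together with the authgroups section, in a <devices> element reproduces a file of the shape shown below.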
root@4e553741a013:/# **exit**
rst@rst:~/NSO/nso300$ **cat ~/lab/devices.xml**
<devices xmlns="http://tail-f.com/ns/ncs">
<authgroups>
<group>
<name>netsim</name>
<default-map>
<remote-name>admin</remote-name>
<remote-password>admin</remote-password>
</default-map>
</group>
</authgroups>
<device>
<name>CE11</name>
<address>CE11</address>
<authgroup>netsim</authgroup>
<device-type>
<cli>
<ned-id xmlns:cisco-ios-cli-6.42="http://tail-f.com/ns/ned-id/cisco-ios-cli-6.42">cisco-ios-cli-6.42:cisco-ios-cli-6.42</ned-id>
</cli>
</device-type>
<state>
<admin-state>unlocked</admin-state>
</state>
</device>
<device>
<name>CE12</name>
<address>CE12</address>
<authgroup>netsim</authgroup>
<device-type>
<cli>
<ned-id xmlns:cisco-ios-cli-6.42="http://tail-f.com/ns/ned-id/cisco-ios-cli-6.42">cisco-ios-cli-6.42:cisco-ios-cli-6.42</ned-id>
</cli>
</device-type>
<state>
<admin-state>unlocked</admin-state>
</state>
</device>
<device>
<name>CE21</name>
<address>CE21</address>
<authgroup>netsim</authgroup>
<device-type>
<cli>
<ned-id xmlns:cisco-ios-cli-6.42="http://tail-f.com/ns/ned-id/cisco-ios-cli-6.42">cisco-ios-cli-6.42:cisco-ios-cli-6.42</ned-id>
</cli>
</device-type>
<state>
<admin-state>unlocked</admin-state>
</state>
</device>
<device>
<name>CE31</name>
<address>CE31</address>
<authgroup>netsim</authgroup>
<device-type>
<cli>
<ned-id xmlns:cisco-ios-cli-6.42="http://tail-f.com/ns/ned-id/cisco-ios-cli-6.42">cisco-ios-cli-6.42:cisco-ios-cli-6.42</ned-id>
</cli>
</device-type>
<state>
<admin-state>unlocked</admin-state>
</state>
</device>
<device>
<name>PE11</name>
<address>PE11</address>
<authgroup>netsim</authgroup>
<device-type>
<cli>
<ned-id xmlns:cisco-ios-cli-6.42="http://tail-f.com/ns/ned-id/cisco-ios-cli-6.42">cisco-ios-cli-6.42:cisco-ios-cli-6.42</ned-id>
</cli>
</device-type>
<state>
<admin-state>unlocked</admin-state>
</state>
</device>
<device>
<name>PE12</name>
<address>PE12</address>
<authgroup>netsim</authgroup>
<device-type>
<cli>
<ned-id xmlns:cisco-ios-cli-6.42="http://tail-f.com/ns/ned-id/cisco-ios-cli-6.42">cisco-ios-cli-6.42:cisco-ios-cli-6.42</ned-id>
</cli>
</device-type>
<state>
<admin-state>unlocked</admin-state>
</state>
</device>
<device>
<name>PE21</name>
<address>PE21</address>
<authgroup>netsim</authgroup>
<device-type>
<cli>
<ned-id xmlns:cisco-iosxr-cli-7.18="http://tail-f.com/ns/ned-id/cisco-iosxr-cli-7.18">cisco-iosxr-cli-7.18:cisco-iosxr-cli-7.18</ned-id>
</cli>
</device-type>
<state>
<admin-state>unlocked</admin-state>
</state>
</device>
<device>
<name>PE31</name>
<address>PE31</address>
<authgroup>netsim</authgroup>
<device-type>
<cli>
<ned-id xmlns:cisco-ios-cli-6.42="http://tail-f.com/ns/ned-id/cisco-ios-cli-6.42">cisco-ios-cli-6.42:cisco-ios-cli-6.42</ned-id>
</cli>
</device-type>
<state>
<admin-state>unlocked</admin-state>
</state>
</device>
<device>
<name>SW31</name>
<address>SW31</address>
<authgroup>netsim</authgroup>
<device-type>
<cli>
<ned-id xmlns:cisco-ios-cli-6.42="http://tail-f.com/ns/ned-id/cisco-ios-cli-6.42">cisco-ios-cli-6.42:cisco-ios-cli-6.42</ned-id>
</cli>
</device-type>
<state>
<admin-state>unlocked</admin-state>
</state>
</device>
<device>
<name>SW32</name>
<address>SW32</address>
<authgroup>netsim</authgroup>
<device-type>
<cli>
<ned-id xmlns:cisco-nx-cli-5.13="http://tail-f.com/ns/ned-id/cisco-nx-cli-5.13">cisco-nx-cli-5.13:cisco-nx-cli-5.13</ned-id>
</cli>
</device-type>
<state>
<admin-state>unlocked</admin-state>
</state>
</device>
<device>
<name>ASA41</name>
<address>ASA41</address>
<authgroup>netsim</authgroup>
<device-type>
<cli>
<ned-id xmlns:cisco-asa-cli-6.7="http://tail-f.com/ns/ned-id/cisco-asa-cli-6.7">cisco-asa-cli-6.7:cisco-asa-cli-6.7</ned-id>
</cli>
</device-type>
<state>
<admin-state>unlocked</admin-state>
</state>
</device>
Step 21: Copy the ../solved_solutions/devices.xml file to the ~/NSO/nso300/extra-files directory. This directory is used to include extra files in the NSO system image. These files are placed in the root of the resulting image.
rst@rst:~/NSO/nso300$ **cp ../solved_solutions/devices.xml extra-files/**
Step 22: Stop, build, and start the Docker environment again for the extra files to appear in the NSO System container.
rst@rst:~/NSO/nso300$ **make testenv-stop** # Output Omitted #
rst@rst:~/NSO/nso300$ **make build** # Output Omitted #
rst@rst:~/NSO/nso300$ **make testenv-start** # Output Omitted #
Step 23: Open the Docker environment’s Makefile and add a testenv-configure target. This target will be used to import and configure devices in your Docker environment.
rst@rst:~/NSO/nso300$ **code Makefile**
--------MAKEFILE--------
export NSO_VERSION=5.3
export NSO_IMAGE_PATH=nso300.gitlab.local/
export IMAGE_PATH=nso300.gitlab.local/
include nidsystem.mk
testenv-start-extra:
@echo "\n== Starting repository specific testenv"
docker run -td --name $(CNT_PREFIX)-CE11 --network-alias CE11 $(DOCKER_ARGS) $(IMAGE_PATH)ned-ios/netsim:$(DOCKER_TAG)
docker run -td --name $(CNT_PREFIX)-CE12 --network-alias CE12 $(DOCKER_ARGS) $(IMAGE_PATH)ned-ios/netsim:$(DOCKER_TAG)
docker run -td --name $(CNT_PREFIX)-CE21 --network-alias CE21 $(DOCKER_ARGS) $(IMAGE_PATH)ned-ios/netsim:$(DOCKER_TAG)
docker run -td --name $(CNT_PREFIX)-CE31 --network-alias CE31 $(DOCKER_ARGS) $(IMAGE_PATH)ned-ios/netsim:$(DOCKER_TAG)
docker run -td --name $(CNT_PREFIX)-PE11 --network-alias PE11 $(DOCKER_ARGS) $(IMAGE_PATH)ned-ios/netsim:$(DOCKER_TAG)
docker run -td --name $(CNT_PREFIX)-PE12 --network-alias PE12 $(DOCKER_ARGS) $(IMAGE_PATH)ned-ios/netsim:$(DOCKER_TAG)
docker run -td --name $(CNT_PREFIX)-PE21 --network-alias PE21 $(DOCKER_ARGS) $(IMAGE_PATH)ned-iosxr/netsim:$(DOCKER_TAG)
docker run -td --name $(CNT_PREFIX)-PE31 --network-alias PE31 $(DOCKER_ARGS) $(IMAGE_PATH)ned-ios/netsim:$(DOCKER_TAG)
docker run -td --name $(CNT_PREFIX)-SW31 --network-alias SW31 $(DOCKER_ARGS) $(IMAGE_PATH)ned-ios/netsim:$(DOCKER_TAG)
docker run -td --name $(CNT_PREFIX)-SW32 --network-alias SW32 $(DOCKER_ARGS) $(IMAGE_PATH)ned-nx/netsim:$(DOCKER_TAG)
docker run -td --name $(CNT_PREFIX)-ASA41 --network-alias ASA41 $(DOCKER_ARGS) $(IMAGE_PATH)ned-asa/netsim:$(DOCKER_TAG)
testenv-configure:
@echo "Configuring test environment"
testenv-test:
@echo "\n== Running tests"
@echo "TODO: Fill in your tests here"
Step 24: Add the NSO commands that import the device configuration, fetch the SSH host keys, and synchronize the configuration from the devices. You can use either the Cisco-style or the Juniper-style CLI by calling the testenv-runcmdC or testenv-runcmdJ target, respectively, to execute commands inside the NSO System container. The commands should simulate the input that a user enters through the NSO CLI; use the newline character \n to simulate pressing the Enter key.
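The mechanism behind these targets can be sketched without NSO: the \n sequences in CMD become real newlines, so the CLI receives one command per line, as if a user typed each one and pressed Enter. A minimal simulation, with a read loop standing in for ncs_cli:

```shell
# printf stands in for 'echo -e "$CMD"'; the read loop stands in for
# ncs_cli reading commands non-interactively from stdin.
printf 'configure \nload merge /devices.xml \ncommit \nexit\n' | while read -r line; do
  echo "CLI receives: $line"
done
```

Each of the four commands arrives on its own line, which is exactly how the noninteractive CLI consumes them.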
Note: Make sure that you indent commands inside Makefiles with tab characters only. If you use ordinary spaces, you will encounter errors.
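A quick way to catch accidental space indentation is to search for recipe lines that begin with a space, since make accepts only tabs there. A small sketch using a throwaway sample file:

```shell
# Write a sample Makefile with one tab-indented and one space-indented
# recipe line, then flag the space-indented one with grep.
tmp=$(mktemp)
printf 'target:\n\techo ok\n    echo bad\n' > "$tmp"
grep -n '^ ' "$tmp" && echo "found space-indented recipe lines"
rm -f "$tmp"
```

Running the same grep against your own Makefile before invoking make saves a round of "missing separator" errors.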
Note: For more information on targets, open and study the nidsystem.mk file.
export NSO_VERSION=5.3
export NSO_IMAGE_PATH=nso300.gitlab.local/
export IMAGE_PATH=nso300.gitlab.local/
include nidsystem.mk
testenv-start-extra:
@echo "\n== Starting repository specific testenv"
docker run -td --name $(CNT_PREFIX)-CE11 --network-alias CE11 $(DOCKER_ARGS) $(IMAGE_PATH)ned-ios/netsim:$(DOCKER_TAG)
docker run -td --name $(CNT_PREFIX)-CE12 --network-alias CE12 $(DOCKER_ARGS) $(IMAGE_PATH)ned-ios/netsim:$(DOCKER_TAG)
docker run -td --name $(CNT_PREFIX)-CE21 --network-alias CE21 $(DOCKER_ARGS) $(IMAGE_PATH)ned-ios/netsim:$(DOCKER_TAG)
docker run -td --name $(CNT_PREFIX)-CE31 --network-alias CE31 $(DOCKER_ARGS) $(IMAGE_PATH)ned-ios/netsim:$(DOCKER_TAG)
docker run -td --name $(CNT_PREFIX)-PE11 --network-alias PE11 $(DOCKER_ARGS) $(IMAGE_PATH)ned-ios/netsim:$(DOCKER_TAG)
docker run -td --name $(CNT_PREFIX)-PE12 --network-alias PE12 $(DOCKER_ARGS) $(IMAGE_PATH)ned-ios/netsim:$(DOCKER_TAG)
docker run -td --name $(CNT_PREFIX)-PE21 --network-alias PE21 $(DOCKER_ARGS) $(IMAGE_PATH)ned-iosxr/netsim:$(DOCKER_TAG)
docker run -td --name $(CNT_PREFIX)-PE31 --network-alias PE31 $(DOCKER_ARGS) $(IMAGE_PATH)ned-ios/netsim:$(DOCKER_TAG)
docker run -td --name $(CNT_PREFIX)-SW31 --network-alias SW31 $(DOCKER_ARGS) $(IMAGE_PATH)ned-ios/netsim:$(DOCKER_TAG)
docker run -td --name $(CNT_PREFIX)-SW32 --network-alias SW32 $(DOCKER_ARGS) $(IMAGE_PATH)ned-nx/netsim:$(DOCKER_TAG)
docker run -td --name $(CNT_PREFIX)-ASA41 --network-alias ASA41 $(DOCKER_ARGS) $(IMAGE_PATH)ned-asa/netsim:$(DOCKER_TAG)
testenv-configure:
@echo "Configuring test environment"
$(MAKE) testenv-runcmdJ CMD="configure \nload merge /devices.xml \ncommit \nexit"
$(MAKE) testenv-runcmdJ CMD="request devices fetch-ssh-host-keys"
$(MAKE) testenv-runcmdJ CMD="request devices sync-from"
testenv-test:
@echo "\n== Running tests"
@echo "TODO: Fill in your tests here"
Step 25: Save and exit the file.
Step 26: Import the devices with the make testenv-configure command that you created in the Makefile. Make sure that the devices end up in-sync.
rst@rst:~/NSO/nso300$ **make testenv-configure**
Configuring test environment
make testenv-runcmdJ CMD="configure \nload merge /devices.xml \ncommit \nexit"
make[1]: Entering directory '/home/rst/NSO/nso300'
docker exec -t testenv-nso300-5.3-rst-nso bash -lc 'echo -e "configure \nload merge /devices.xml \ncommit \nexit" | ncs_cli --noninteractive --stop-on-error -Ju admin'
Commit complete.
make[1]: Leaving directory '/home/rst/NSO/nso300'
make testenv-runcmdJ CMD="request devices fetch-ssh-host-keys"
make[1]: Entering directory '/home/rst/NSO/nso300'
docker exec -t testenv-nso300-5.3-rst-nso bash -lc 'echo -e "request devices fetch-ssh-host-keys" | ncs_cli --noninteractive --stop-on-error -Ju admin'
fetch-result {
device ASA41
result updated
fingerprint {
algorithm ssh-rsa
value e4:f5:d0:0f:9c:18:40:c3:f2:1d:41:2c:3d:f3:74:63
}
}
fetch-result {
device CE11
result updated
fingerprint {
algorithm ssh-rsa
value e4:f5:d0:0f:9c:18:40:c3:f2:1d:41:2c:3d:f3:74:63
}
}
fetch-result {
device CE12
result updated
fingerprint {
algorithm ssh-rsa
value e4:f5:d0:0f:9c:18:40:c3:f2:1d:41:2c:3d:f3:74:63
}
}
fetch-result {
device CE21
result updated
fingerprint {
algorithm ssh-rsa
value e4:f5:d0:0f:9c:18:40:c3:f2:1d:41:2c:3d:f3:74:63
}
}
fetch-result {
device CE31
result updated
fingerprint {
algorithm ssh-rsa
value e4:f5:d0:0f:9c:18:40:c3:f2:1d:41:2c:3d:f3:74:63
}
}
fetch-result {
device PE11
result updated
fingerprint {
algorithm ssh-rsa
value e4:f5:d0:0f:9c:18:40:c3:f2:1d:41:2c:3d:f3:74:63
}
}
fetch-result {
device PE12
result updated
fingerprint {
algorithm ssh-rsa
value e4:f5:d0:0f:9c:18:40:c3:f2:1d:41:2c:3d:f3:74:63
}
}
fetch-result {
device PE21
result updated
fingerprint {
algorithm ssh-rsa
value e4:f5:d0:0f:9c:18:40:c3:f2:1d:41:2c:3d:f3:74:63
}
}
fetch-result {
device PE31
result updated
fingerprint {
algorithm ssh-rsa
value e4:f5:d0:0f:9c:18:40:c3:f2:1d:41:2c:3d:f3:74:63
}
}
fetch-result {
device SW31
result updated
fingerprint {
algorithm ssh-rsa
value e4:f5:d0:0f:9c:18:40:c3:f2:1d:41:2c:3d:f3:74:63
}
}
fetch-result {
device SW32
result updated
fingerprint {
algorithm ssh-rsa
value e4:f5:d0:0f:9c:18:40:c3:f2:1d:41:2c:3d:f3:74:63
}
}
make[1]: Leaving directory '/home/rst/NSO/nso300'
make testenv-runcmdJ CMD="request devices sync-from"
make[1]: Entering directory '/home/rst/NSO/nso300'
docker exec -t testenv-nso300-5.3-rst-nso bash -lc 'echo -e "request devices sync-from" | ncs_cli --noninteractive --stop-on-error -Ju admin'
sync-result {
device ASA41
result true
}
sync-result {
device CE11
result true
}
sync-result {
device CE12
result true
}
sync-result {
device CE21
result true
}
sync-result {
device CE31
result true
}
sync-result {
device PE11
result true
}
sync-result {
device PE12
result true
}
sync-result {
device PE21
result true
}
sync-result {
device PE31
result true
}
sync-result {
device SW31
result true
}
sync-result {
device SW32
result true
}
make[1]: Leaving directory '/home/rst/NSO/nso300'
rst@rst:~/NSO/nso300$
Step 27: Enter the NSO System container, switch the CLI type, and list the devices. All eleven devices should be present.
rst@rst:~/NSO/nso300$ **make testenv-cli**
docker exec -it testenv-nso300-5.3-rst-nso bash -lc 'ncs_cli -u admin'
admin connected from 127.0.0.1 using console on 83dc28b9733d
admin@ncs> **switch cli**
admin@ncs# **show devices list**
NAME ADDRESS DESCRIPTION NED ID ADMIN STATE
--------------------------------------------------------------
ASA41 ASA41 - cisco-asa-cli-6.7 unlocked
CE11 CE11 - cisco-ios-cli-6.42 unlocked
CE12 CE12 - cisco-ios-cli-6.42 unlocked
CE21 CE21 - cisco-ios-cli-6.42 unlocked
CE31 CE31 - cisco-ios-cli-6.42 unlocked
PE11 PE11 - cisco-ios-cli-6.42 unlocked
PE12 PE12 - cisco-ios-cli-6.42 unlocked
PE21 PE21 - cisco-iosxr-cli-7.18 unlocked
PE31 PE31 - cisco-ios-cli-6.42 unlocked
SW31 SW31 - cisco-ios-cli-6.42 unlocked
SW32 SW32 - cisco-nx-cli-5.13 unlocked
admin@ncs#
Step 28: Exit the NSO System container.
admin@ncs# **exit**
In this task, you will learn how to develop an NSO package using NSO in Docker.
Step 1: Create and run a development container using the make dev-shell command. This creates a container that includes the ncsc YANG compiler and a Java compiler, adds the ncs commands to the path, and provides other useful tools for NSO package development.
rst@rst:~/NSO/nso300$ **make dev-shell**
docker run -it -v $(pwd):/src nso300.gitlab.local/cisco-nso-dev:5.3
root@13beec21ff16:/#
Step 2: Navigate to the /src/packages folder.
root@13beec21ff16:/# **cd src/packages/**
root@13beec21ff16:/src/packages#
Step 3: Create a template-based hostname package using the ncs-make-package command.
root@13beec21ff16:/src/packages# **ncs-make-package --service-skeleton template hostname**
Step 4: List the contents of the current folder. It should contain the hostname package.
root@13beec21ff16:/src/packages# ls
hostname
Step 5: Change the ownership of the package to a non-root user. The owner with UID 1000 is the user rst in this case.
root@13beec21ff16:/src/packages# **chown -Rv 1000:1000 hostname**
changed ownership of 'hostname/templates/hostname-template.xml' from root:root to 1000:1000
changed ownership of 'hostname/templates' from root:root to 1000:1000
changed ownership of 'hostname/src/yang/hostname.yang' from root:root to 1000:1000
changed ownership of 'hostname/src/yang' from root:root to 1000:1000
changed ownership of 'hostname/src/Makefile' from root:root to 1000:1000
changed ownership of 'hostname/src' from root:root to 1000:1000
changed ownership of 'hostname/test/internal/lux/basic/run.lux' from root:root to 1000:1000
changed ownership of 'hostname/test/internal/lux/basic/Makefile' from root:root to 1000:1000
changed ownership of 'hostname/test/internal/lux/basic' from root:root to 1000:1000
changed ownership of 'hostname/test/internal/lux/Makefile' from root:root to 1000:1000
changed ownership of 'hostname/test/internal/lux' from root:root to 1000:1000
changed ownership of 'hostname/test/internal/Makefile' from root:root to 1000:1000
changed ownership of 'hostname/test/internal' from root:root to 1000:1000
changed ownership of 'hostname/test/Makefile' from root:root to 1000:1000
changed ownership of 'hostname/test' from root:root to 1000:1000
changed ownership of 'hostname/package-meta-data.xml' from root:root to 1000:1000
changed ownership of 'hostname' from root:root to 1000:1000
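The reason for the numeric IDs above: the packages directory is bind mounted, so files created by root inside the dev container are owned by UID 0 on the host as well, and changing their ownership to your own UID/GID (commonly 1000 for the first Linux user) makes them editable without sudo. You can check your own IDs like this:

```shell
# Print the current user's numeric UID and GID, the values you would
# pass to chown (1000:1000 for the user rst in this lab).
echo "UID: $(id -u), GID: $(id -g)"
```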
Step 6: Exit the development container.
root@13beec21ff16:/src/packages# exit
logout
rst@rst:~/NSO/nso300$
Step 7: List the contents of the packages directory. The hostname package is present here too. This is because the packages directory is bind mounted to the NSO Docker development container.
rst@rst:~/NSO/nso300$ ls packages/
hostname
rst@rst:~/NSO/nso300$
Step 8: Enter the NSO CLI in the NSO system container and switch the CLI type.
rst@rst:~/NSO/nso300$ **make testenv-cli**
docker exec -it testenv-nso300-5.3-rst-nso bash -lc 'ncs_cli -u admin'
admin connected from 127.0.0.1 using console on 83dc28b9733d
admin@ncs> **switch cli**
Step 9: Configure the hostname of one Cisco-IOS, Cisco-IOSXR, Cisco-NX, and Cisco ASA device.
admin@ncs# **config**
Entering configuration mode terminal
admin@ncs(config)# devices device CE11 config hostname CE11.nso300.local
admin@ncs(config-config)# top
admin@ncs(config)# devices device PE21 config hostname PE21.nso300.local
admin@ncs(config-config)# top
admin@ncs(config)# devices device SW32 config hostname SW32.nso300.local
admin@ncs(config-config)# top
admin@ncs(config)# devices device ASA41 config hostname ASA41.nso300.local
admin@ncs(config-config)# top
Step 10: Make a dry run of the commit and save the configuration output to the clipboard or a text editor. This configuration will be used to create the service template.
admin@ncs(config-config)# **commit dry-run outformat xml**
result-xml {
local-node {
data <devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>ASA41</name>
<config>
<hostname xmlns="http://cisco.com/ned/asa">ASA41.nso300.local</hostname>
</config>
</device>
<device>
<name>CE11</name>
<config>
<hostname xmlns="urn:ios">CE11.nso300.local</hostname>
</config>
</device>
<device>
<name>PE21</name>
<config>
<hostname xmlns="http://tail-f.com/ned/cisco-ios-xr">PE21.nso300.local</hostname>
</config>
</device>
<device>
<name>SW32</name>
<config>
<hostname xmlns="http://tail-f.com/ned/cisco-nx">SW32.nso300.local</hostname>
</config>
</device>
</devices>
}
}
Step 11: Exit the NSO CLI and NSO System container.
admin@ncs(config)# **abort**
admin@ncs# **exit**
rst@rst:~/NSO/nso300$
Step 12: Open the packages/hostname/src/yang/hostname.yang file.
rst@rst:~/NSO/nso300$ code packages/hostname/src/yang/hostname.yang
This is how the YANG model should appear when you open it for the first time.
module hostname {
namespace "http://com/example/hostname";
prefix hostname;
import ietf-inet-types {
prefix inet;
}
import tailf-ncs {
prefix ncs;
}
list hostname {
key name;
uses ncs:service-data;
ncs:servicepoint "hostname";
leaf name {
type string;
}
// may replace this with other ways of refering to the devices.
leaf-list device {
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
// replace with your own stuff here
leaf dummy {
type inet:ipv4-address;
}
}
}
Step 13: Remove the dummy code and the name leaf, import the tailf-common module, and add a hostname leaf.
import tailf-common {
prefix tailf;
}
leaf hostname {
tailf:info "Device hostname";
type string;
}
Step 14: Change the device YANG type from leaf-list to a leaf and make it the service key.
module hostname {
namespace "http://com/example/hostname";
prefix hostname;
import ietf-inet-types {
prefix inet;
}
import tailf-ncs {
prefix ncs;
}
import tailf-common {
prefix tailf;
}
list hostname {
key device;
uses ncs:service-data;
ncs:servicepoint "hostname";
leaf device {
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
leaf hostname {
tailf:info "Device hostname";
type string;
}
}
}
Step 15: Save and exit the file.
Step 16: Open the packages/hostname/templates/hostname-template.xml file.
rst@rst:~/NSO/nso300$ **code packages/hostname/templates/hostname-template.xml**
This is how the file should appear when you open it for the first time.
<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="hostname">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<!--
Select the devices from some data structure in the service
model. In this skeleton the devices are specified in a leaf-list.
Select all devices in that leaf-list:
-->
<name>{/device}</name>
<config>
<!--
Add device-specific parameters here.
In this skeleton the service has a leaf "dummy"; use that
to set something on the device e.g.:
<ip-address-on-device>{/dummy}</ip-address-on-device>
-->
</config>
</device>
</devices>
</config-template>
Step 17: Remove the comments and replace the device configuration with the configuration produced by the dry-run of the commit a few steps back.
<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="hostname">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>ASA41</name>
<config>
<hostname xmlns="http://cisco.com/ned/asa">ASA41.nso300.local</hostname>
</config>
</device>
<device>
<name>CE11</name>
<config>
<hostname xmlns="urn:ios">CE11.nso300.local</hostname>
</config>
</device>
<device>
<name>PE21</name>
<config>
<hostname xmlns="http://tail-f.com/ned/cisco-ios-xr">PE21.nso300.local</hostname>
</config>
</device>
<device>
<name>SW32</name>
<config>
<hostname xmlns="http://tail-f.com/ned/cisco-nx">SW32.nso300.local</hostname>
</config>
</device>
</devices>
</config-template>
Step 18: Replace the hardcoded values with variables used in the YANG model.
<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="hostname">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>**{/device}**</name>
<config>
<hostname xmlns="http://cisco.com/ned/asa">**{/hostname}**</hostname>
</config>
</device>
<device>
<name>**{/device}**</name>
<config>
<hostname xmlns="urn:ios">**{/hostname}**</hostname>
</config>
</device>
<device>
<name>**{/device}**</name>
<config>
<hostname xmlns="http://tail-f.com/ned/cisco-ios-xr">**{/hostname}**</hostname>
</config>
</device>
<device>
<name>**{/device}**</name>
<config>
<hostname xmlns="http://tail-f.com/ned/cisco-nx">**{/hostname}**</hostname>
</config>
</device>
</devices>
</config-template>
Step 19: Optimize the template, since most of the elements are duplicated. Keep only one device element, in which the correct hostname element is applied depending on the device namespace.
<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="hostname">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>{/device}</name>
<config>
**<hostname xmlns="http://cisco.com/ned/asa">{/hostname}</hostname>**
**<hostname xmlns="urn:ios">{/hostname}</hostname>**
**<hostname xmlns="http://tail-f.com/ned/cisco-ios-xr">{/hostname}</hostname>**
**<hostname xmlns="http://tail-f.com/ned/cisco-nx">{/hostname}</hostname>**
</config>
</device>
</devices>
</config-template>
Step 20: Save the file and exit the file editor.
Step 21: Rebuild the package using the make testenv-build command. This command determines what kind of changes (if any) have been made to the packages, and then performs a full package reload or just a package redeploy accordingly.
Note: Instead of using the make testenv-build command, you can recompile the packages by running the make command in the /src directory and then reload them with the reload packages command from the NSO CLI.
rst@rst:~/NSO/nso300$ make testenv-build
for NSO in $(docker ps --format '{{.Names}}' --filter label=com.cisco.nso.testenv.name=testenv-nso300-5.3-rst --filter label=com.cisco.nso.testenv.type=nso); do \
echo "-- Rebuilding for NSO: ${NSO}"; \
docker run -it --rm -v /home/rst/NSO/nso300:/src --volumes-from ${NSO} --network=container:${NSO} -e NSO=${NSO} -e PACKAGE_RELOAD= -e SKIP_LINT= -e PKG_FILE=nso300.gitlab.local/nso300/package:5.3-rst nso300.gitlab.local/cisco-nso-dev:5.3 /src/nid/testenv-build; \
done
### OUTPUT OMITTED ###
reload-result {
package cisco-ios-cli-6.42
result true
}
reload-result {
package cisco-iosxr-cli-7.18
result true
}
reload-result {
package cisco-nx-cli-5.13
result true
}
reload-result {
package hostname
result true
}
Step 22: Enter the NSO CLI in NSO System container and switch the CLI type.
rst@rst:~/NSO/nso300$ **make testenv-cli**
docker exec -it testenv-nso300-5.3-rst-nso bash -lc 'ncs_cli -u admin'
admin connected from 127.0.0.1 using console on 83dc28b9733d
admin@ncs> **switch cli**
admin@ncs#
Step 23: Configure an instance of the hostname service for the PE21 device and verify the dry-run configuration.
admin@ncs# **config**
Entering configuration mode terminal
admin@ncs(config)# **hostname PE21 hostname PE21.nso300.local**
admin@ncs(config-hostname-PE21)# **top**
admin@ncs(config)# **commit dry-run**
cli {
local-node {
data devices {
device PE21 {
config {
+ hostname PE21.nso300.local;
}
}
}
+hostname PE21 {
+ hostname PE21.nso300.local;
+}
}
}
admin@ncs(config)#
Step 24: Commit the transaction and exit the NSO CLI.
admin@ncs(config)# **commit**
Commit complete.
admin@ncs(config)# **exit**
admin@ncs# exit
rst@rst:~/NSO/nso300$
Step 25: Enter the NSO System Linux shell with the make testenv-shell command.
rst@rst:~/NSO/nso300$ **make testenv-shell**
docker exec -it testenv-nso300-5.3-rst-nso bash -l
Step 26: Connect to the PE21 netsim device. Use the ssh command with the username admin and password admin to establish an SSH connection.
root@83dc28b9733d:/# **ssh admin@PE21**
The authenticity of host 'pe21 (172.21.0.9)' can't be established.
RSA key fingerprint is SHA256:1jUV89dyxYHqWQosW480ckPlIl6fQDtCTpM3PQwHhrM.
Are you sure you want to continue connecting (yes/no)? **yes**
Warning: Permanently added 'pe21,172.21.0.9' (RSA) to the list of known hosts.
admin@pe21's password:
admin connected from 172.21.0.2 using ssh on 939be4fa57e4
939be4fa57e4#
Step 27: Verify that the configuration (hostname) has been successfully applied to the device.
939be4fa57e4# **show running-config hostname**
hostname PE21.nso300.local
939be4fa57e4#
Step 28: Terminate the session and exit the NSO System container.
939be4fa57e4# **exit**
Connection to pe21 closed.
root@83dc28b9733d:/# **exit**
logout
rst@rst:~/NSO/nso300$
In this task, you will create an automated test procedure for an NSO package using NSO in Docker. Because this NSO Docker environment is so quick to set up, it provides a simple but powerful framework for testing your NSO packages.
Step 1: Open the Makefile for the nso300 project.
The following output shows how the file should appear when you open it. In this activity, you will edit the testenv-test target to create a simple automated testing procedure.
rst@rst:~/NSO/nso300$ **code Makefile**
----------
export NSO\_VERSION=5.3
export NSO\_IMAGE\_PATH=nso300.gitlab.local/
export IMAGE\_PATH=nso300.gitlab.local/
include nidsystem.mk
testenv-start-extra:
@echo "\\n== Starting repository specific testenv"
docker run -td --name $(CNT\_PREFIX)-CE11 --network-alias CE11 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-CE12 --network-alias CE12 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-CE21 --network-alias CE21 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-CE31 --network-alias CE31 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-PE11 --network-alias PE11 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-PE12 --network-alias PE12 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-PE21 --network-alias PE21 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-iosxr/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-PE31 --network-alias PE31 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-SW31 --network-alias SW31 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-SW32 --network-alias SW32 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-nx/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-ASA41 --network-alias ASA41 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-asa/netsim:$(DOCKER\_TAG)
testenv-configure:
@echo "Configuring test environment"
$(MAKE) testenv-runcmdJ CMD="configure \\nload merge /devices.xml \\ncommit \\nexit"
$(MAKE) testenv-runcmdJ CMD="request devices fetch-ssh-host-keys"
$(MAKE) testenv-runcmdJ CMD="request devices sync-from"
testenv-test:
@echo "\\n== Running tests"
@echo "TODO: Fill in your tests here"
Step 2: Create a set of commands that deploys an instance of the hostname service on the SW32 device and verifies that the hostname has been set. To verify this, pipe the CLI output to the grep command and search it for the expected hostname.
Note: If your tests require some pre-existing device configuration, you can always use the ncs_load command to load some configuration into CDB before running the tests.
export NSO\_VERSION=5.3
export NSO\_IMAGE\_PATH=nso300.gitlab.local/
export IMAGE\_PATH=nso300.gitlab.local/
include nidsystem.mk
testenv-start-extra:
@echo "\\n== Starting repository specific testenv"
docker run -td --name $(CNT\_PREFIX)-CE11 --network-alias CE11 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-CE12 --network-alias CE12 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-CE21 --network-alias CE21 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-CE31 --network-alias CE31 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-PE11 --network-alias PE11 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-PE12 --network-alias PE12 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-PE21 --network-alias PE21 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-iosxr/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-PE31 --network-alias PE31 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-SW31 --network-alias SW31 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-SW32 --network-alias SW32 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-nx/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-ASA41 --network-alias ASA41 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-asa/netsim:$(DOCKER\_TAG)
testenv-configure:
@echo "Configuring test environment"
$(MAKE) testenv-runcmdJ CMD="configure \\nload merge /devices.xml \\ncommit \\nexit"
$(MAKE) testenv-runcmdJ CMD="request devices fetch-ssh-host-keys"
$(MAKE) testenv-runcmdJ CMD="request devices sync-from"
testenv-test:
@echo "\\n== Running tests"
**@echo "Hostname package tests"**
**$(MAKE) testenv-runcmdJ CMD="configure \\nset hostname SW32 hostname SW32.nso300.local \\ncommit"**
**$(MAKE) testenv-runcmdJ CMD="show configuration devices device SW32 config hostname" | grep "SW32.nso300.local"**
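The positive check above can be exercised in isolation. The following is a standalone sketch using a hard-coded sample of the CLI output, not a live NSO session: grep exits 0 only when the expected hostname is present, so a missing hostname fails the pipeline and, in turn, the make target.

```shell
#!/bin/sh
# Sample of what "show configuration devices device SW32 config hostname"
# prints once the service has been deployed (hard-coded for illustration).
cli_output='hostname SW32.nso300.local;'

# grep succeeds (exit 0) only when the hostname is present in the output;
# a failed match would abort the make recipe at this line.
echo "$cli_output" | grep "SW32.nso300.local"
```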
Step 3: Add an extra command that ensures that the hostname is not set beforehand. You can do this by counting the matches with the grep -c flag and verifying that the count is 0.
export NSO\_VERSION=5.3
export NSO\_IMAGE\_PATH=nso300.gitlab.local/
export IMAGE\_PATH=nso300.gitlab.local/
include nidsystem.mk
testenv-start-extra:
@echo "\\n== Starting repository specific testenv"
docker run -td --name $(CNT\_PREFIX)-CE11 --network-alias CE11 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-CE12 --network-alias CE12 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-CE21 --network-alias CE21 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-CE31 --network-alias CE31 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-PE11 --network-alias PE11 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-PE12 --network-alias PE12 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-PE21 --network-alias PE21 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-iosxr/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-PE31 --network-alias PE31 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-SW31 --network-alias SW31 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-SW32 --network-alias SW32 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-nx/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-ASA41 --network-alias ASA41 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-asa/netsim:$(DOCKER\_TAG)
testenv-configure:
@echo "Configuring test environment"
$(MAKE) testenv-runcmdJ CMD="configure \\nload merge /devices.xml \\ncommit \\nexit"
$(MAKE) testenv-runcmdJ CMD="request devices fetch-ssh-host-keys"
$(MAKE) testenv-runcmdJ CMD="request devices sync-from"
testenv-test:
@echo "\\n== Running tests"
@echo "Hostname package tests"
**$(MAKE) testenv-runcmdJ CMD="show configuration devices device SW32 config hostname" | grep -c "SW32.nso300.local" | grep 0**
$(MAKE) testenv-runcmdJ CMD="configure \\nset hostname SW32 hostname SW32.nso300.local \\ncommit"
$(MAKE) testenv-runcmdJ CMD="show configuration devices device SW32 config hostname" | grep "SW32.nso300.local"
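The zero-count check deserves a note on shell mechanics: `grep -c` prints the match count but exits nonzero when that count is 0, and a POSIX pipeline reports the exit status of its last command only. Piping the printed count into a second `grep 0` therefore turns "no matches" into a success for make. A standalone sketch with hard-coded sample output (no live NSO session):

```shell
#!/bin/sh
# Sample CLI output before the hostname service has been deployed.
cli_output='% No entries found.'

# "grep -c" prints 0 here but exits 1; the trailing "grep 0" matches the
# printed count, so the pipeline as a whole exits 0 and make continues.
echo "$cli_output" | grep -c "SW32.nso300.local" | grep 0
```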
Step 4: Finally, add a set of commands to clean up the configuration after the test has completed. Use the command from the previous step to verify that the hostname is no longer set at the end.
export NSO\_VERSION=5.3
export NSO\_IMAGE\_PATH=nso300.gitlab.local/
export IMAGE\_PATH=nso300.gitlab.local/
include nidsystem.mk
testenv-start-extra:
@echo "\\n== Starting repository specific testenv"
docker run -td --name $(CNT\_PREFIX)-CE11 --network-alias CE11 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-CE12 --network-alias CE12 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-CE21 --network-alias CE21 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-CE31 --network-alias CE31 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-PE11 --network-alias PE11 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-PE12 --network-alias PE12 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-PE21 --network-alias PE21 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-iosxr/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-PE31 --network-alias PE31 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-SW31 --network-alias SW31 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-ios/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-SW32 --network-alias SW32 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-nx/netsim:$(DOCKER\_TAG)
docker run -td --name $(CNT\_PREFIX)-ASA41 --network-alias ASA41 $(DOCKER\_ARGS) $(IMAGE\_PATH)ned-asa/netsim:$(DOCKER\_TAG)
testenv-configure:
@echo "Configuring test environment"
$(MAKE) testenv-runcmdJ CMD="configure \\nload merge /devices.xml \\ncommit \\nexit"
$(MAKE) testenv-runcmdJ CMD="request devices fetch-ssh-host-keys"
$(MAKE) testenv-runcmdJ CMD="request devices sync-from"
testenv-test:
@echo "\\n== Running tests"
@echo "Hostname package tests"
$(MAKE) testenv-runcmdJ CMD="show configuration devices device SW32 config hostname" | grep -c "SW32.nso300.local" | grep 0
$(MAKE) testenv-runcmdJ CMD="configure \\nset hostname SW32 hostname SW32.nso300.local \\ncommit"
$(MAKE) testenv-runcmdJ CMD="show configuration devices device SW32 config hostname" | grep "SW32.nso300.local"
**$(MAKE) testenv-runcmdJ CMD="configure \\ndelete hostname SW32 \\ncommit"**
**$(MAKE) testenv-runcmdJ CMD="show configuration devices device SW32 config hostname" | grep -c "SW32.nso300.local" | grep 0**
**@echo "Hostname package test completed!"**
Step 5: Save the file and exit the file editor.
Step 6: Run the tests by using the make testenv-test command. Verify that the tests have successfully completed. If any of the intermediate commands fail, the execution of the Makefile will stop at that point.
Note: This is the simplest form of tests you can implement with NSO in Docker. For more advanced tests and more complex services, consider creating a sandbox environment with actual virtual network devices that support end-to-end tests.
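The stop-on-failure behavior comes from make itself: each recipe line runs in its own shell, and make aborts the target as soon as a line returns a nonzero status. A minimal shell simulation of that flow (the grep call and messages are illustrative, not part of the lab Makefile):

```shell
#!/bin/sh
# Each "sh -c" stands in for one recipe line of the testenv-test target.
sh -c 'echo "test step passed"'             # exit 0: make moves on

# A failed assertion (grep finds nothing in /dev/null, so it exits 1) is
# the point at which make would stop executing the rest of the recipe.
if ! sh -c 'grep -q "SW32.nso300.local" /dev/null'; then
    echo "assertion failed: make stops here"
fi
```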
rst@rst:~/NSO/nso300$ **make testenv-test**
== Running tests
Hostname package tests
make testenv-runcmdJ CMD="show configuration devices device SW32 config hostname" | grep -c "SW32.nso300.local" | grep 0
0
make testenv-runcmdJ CMD="configure \\nset hostname SW32 hostname SW32.nso300.local \\ncommit"
make\[1\]: Entering directory '/home/rst/NSO/nso300'
docker exec -t testenv-nso300-5.3-rst-nso bash -lc 'echo -e "configure \\nset hostname SW32 hostname SW32.nso300.local \\ncommit" | ncs\_cli --noninteractive --stop-on-error -Ju admin'
Commit complete.
make\[1\]: Leaving directory '/home/rst/NSO/nso300'
make testenv-runcmdJ CMD="show configuration devices device SW32 config hostname" | grep "SW32.nso300.local"
hostname SW32.nso300.local;
make testenv-runcmdJ CMD="configure \\ndelete hostname SW32 \\ncommit"
make\[1\]: Entering directory '/home/rst/NSO/nso300'
docker exec -t testenv-nso300-5.3-rst-nso bash -lc 'echo -e "configure \\ndelete hostname SW32 \\ncommit" | ncs\_cli --noninteractive --stop-on-error -Ju admin'
Commit complete.
make\[1\]: Leaving directory '/home/rst/NSO/nso300'
make testenv-runcmdJ CMD="show configuration devices device SW32 config hostname" | grep -c "SW32.nso300.local" | grep 0
0
Hostname package test completed!
rst@rst:~/NSO/nso300$
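As the expanded make output above shows, testenv-runcmdJ works by echoing the CMD string into ncs\_cli inside the NSO container, with echo -e expanding the literal \n sequences into newlines so that each Juniper-style CLI command lands on its own line. The expansion can be seen in isolation (sketched here with printf, the portable equivalent):

```shell
#!/bin/sh
# printf expands "\n" the same way "echo -e" does in the docker exec call,
# turning the single CMD string into one CLI command per line.
printf 'configure \nset hostname SW32 hostname SW32.nso300.local \ncommit\n'
```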
You have successfully designed and executed automated tests for an NSO service using NSO in Docker.