Teach Yourself OpenDaylight with NETCONF in 15 Minutes

An Overview of OpenDaylight, NETCONF, and Yang

  • OpenDaylight automatically generates a RestConf API for every syntax-valid yang model available to it, regardless of which southbound interface the model came from.
  • NETCONF is a network element configuration protocol with features such as rollback-on-failure and automatic yang model upload to the ODL controller.
  • Yang is a declarative modeling standard that allows for easy augmentation and interactive RPCs.

Together, these are the "easy button" to allow immediate programmability for any fully NETCONF+yang-compliant network element.

 

Prerequisites/what you'll need:

Three ingredients:
  • ODL-powered Lumina SDN Controller (LSC) VM
  • Any NETCONF+yang-certified network element
  • Workstation with the Postman API tool


1. "Easy button" ODL controller:
This is a prebuilt Ubuntu 16.04 QCOW2 VM image.
The LSC packaging of ODL is preinstalled and ready.
Import it with your virtualization or cloud software; allocate 16GB RAM, 8 cores/threads, one NIC, and adequate storage (~25GB is typical).
[see Appendix for example if needed]

NOTE: If your virtual environment doesn't use cloud-init/cloud-config, expect a delay of roughly 3 minutes on first boot while the cloud-init process times out (visible on the VM console). Once booted, you can run "sudo apt-get remove cloud-init" to disable it, and manually configure Ubuntu's /etc/network/interfaces and /etc/resolv.conf to your liking.

Our pre-built image credentials are lumina/Lumina1
ODL default credentials are admin/admin for API & UI

  • Once the VM is running, you can manage ODL with:
    • /opt/lsc/controller/bin/start
    • /opt/lsc/controller/bin/status
    • /opt/lsc/controller/bin/stop
  • You can tune various environment variables & sizing by editing this if needed:
    • /opt/lsc/controller/bin/setenv
  • Tail the logs with:
    • tail -F /opt/lsc/controller/data/log/karaf.log
    • It's normal to leave that running in an interactive lab.
  • Some URLs to verify that the controller is running are shown below.

NOTE: these pages are built dynamically, so the first load can be slow.
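
For example, from any workstation that can reach the controller (assuming the default RestConf port 8181 and the admin/admin credentials; exact apidoc paths can vary slightly between ODL releases):

# Example controller IP for illustration only; substitute your own.
CONTROLLER=192.0.2.100
# RestConf should answer (after authentication) with the topology-netconf
# topology, which stays empty until you mount a device in step 4:
curl -s -u admin:admin "http://$CONTROLLER:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/"
# The dynamically generated API explorer (slow on first load) is typically at:
#   http://$CONTROLLER:8181/apidoc/explorer/index.html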



2. Prepare a NETCONF+yang-capable network element running a network operating system, such as:
Arista EOS, vEOS, or cEOS (latest NETCONF-certified version)
Cisco IOS-XRv or IOS-XR (latest)
Juniper vMX or MX (18.04 or newer)
Nokia SR OS or vSIM (latest)
DaNOS (1908 or newer)
NOTE: There are many others; keeping this list short for convenience.

This network element should be reachable to/from the controller VM.
Apply just enough base configuration on your network element to make it NETCONF-accessible from the controller.

An easy way to verify that your network element is ready for NETCONF+yang is to do the following from the LSC/ODL VM:
ssh -p <port> -s <username>@<router.ip> netconf
This will open a NETCONF session over SSH, and you should see a long NETCONF handshake (the capabilities exchange) in the ssh session. Press CTRL-C to exit this test. Keep working on enabling NETCONF+yang correctly for your network vendor's OS until this "handshake test" appears successfully.
See your vendor's documentation for correctly enabling RFC-compliant & yang-compliant NETCONF.
NOTE: Examples in the Appendix below.
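
A minimal concrete version of that test, assuming a device at 192.0.2.1 with NETCONF on the default port 830 and a user named lumina (all illustrative values):

# Request the "netconf" SSH subsystem from the device; adjust IP, port, and user.
ssh -p 830 -s lumina@192.0.2.1 netconf
# Success looks like a large XML <hello> element listing the device's NETCONF
# capabilities and supported yang modules; press CTRL-C to end the test.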

3. Access the controller's RestConf API with Postman
If you don't already have Postman installed: https://www.getpostman.com/downloads/
If you have never used an API tool like Postman, watch the tutorials on their site.

In the global environment in Postman, set the following variables (the names below are only examples; any names work as long as your requests reference them):
- {{controller-ip}} - your controller IP
- {{controller-port}} - 8181
- {{node-name}} - the name of your network element (like the hostname portion of an FQDN)
- {{node-ip}} - the IP address of your network element
There will be places in the URLs and payloads below for these variables.
Verify that the controller and admin/admin auth are working:
URL:
http://{{controller-ip}}:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/
Headers:
Authorization: Basic YWRtaW46YWRtaW4=
Accept: application/xml
Content-Type: application/xml
Body:
None needed for a GET.
NOTE: that Authorization value is the Base64 encoding of admin/admin; you can use Postman's "Bulk Edit" mode to paste these headers in.
Save your API call -- you can duplicate it to quickly create more.
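
If you prefer the command line, the same check works with curl from the LSC VM or your workstation (a sketch; the IP below is an example placeholder for your controller):

CONTROLLER=192.0.2.100   # example value; use your controller IP
curl -s -u admin:admin \
  -H "Accept: application/xml" \
  "http://$CONTROLLER:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/"
# Before any devices are mounted this should return little more than an empty
# <topology> element for topology-netconf.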

4. Mount your network element
Use an HTTP PUT with the following URL, headers, and body:
URL:
http://{{controller-ip}}:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/{{node-name}}
Headers:
(same as above)
Body:
<node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
<node-id>{{node-name}}</node-id>
<username xmlns="urn:opendaylight:netconf-node-topology">#####</username>
<password xmlns="urn:opendaylight:netconf-node-topology">#####</password>
<host xmlns="urn:opendaylight:netconf-node-topology">{{node-ip}}</host>
<schema-cache-directory xmlns="urn:opendaylight:netconf-node-topology"></schema-cache-directory>
<port xmlns="urn:opendaylight:netconf-node-topology">830</port>
<tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
<schemaless xmlns="urn:opendaylight:netconf-node-topology">false</schemaless>
<max-connection-attempts xmlns="urn:opendaylight:netconf-node-topology">0</max-connection-attempts>
<connection-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">20000</connection-timeout-millis>
<default-request-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">60000</default-request-timeout-millis>
<sleep-factor xmlns="urn:opendaylight:netconf-node-topology">1.1</sleep-factor>
<between-attempts-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">2000</between-attempts-timeout-millis>
<reconnect-on-changed-schema xmlns="urn:opendaylight:netconf-node-topology">false</reconnect-on-changed-schema>
<keepalive-delay xmlns="urn:opendaylight:netconf-node-topology">60</keepalive-delay>
<concurrent-rpc-limit xmlns="urn:opendaylight:netconf-node-topology">0</concurrent-rpc-limit>
<actor-response-wait-time xmlns="urn:opendaylight:netconf-node-topology">600</actor-response-wait-time>
</node>

NOTE: ##### needs to be replaced with your network element's login credentials, and the {{node-name}} in <node-id> must match the node name in the URL.
NOTE: there are also secure RPCs for adding NETCONF nodes with encrypted passwords; see the docs or the Support team for details. The classic method shown here is easiest to inspect in a lab.
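
As a command-line alternative to Postman, the same mount can be done with curl (a sketch: save the XML body above, with your values filled in, to a file such as mount.xml; the controller IP and node name below are examples only):

CONTROLLER=192.0.2.100   # example controller IP
NODE=router1             # example node name; must match <node-id> in mount.xml
curl -s -u admin:admin -X PUT \
  -H "Content-Type: application/xml" -H "Accept: application/xml" \
  -d @mount.xml \
  "http://$CONTROLLER:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/$NODE"
# A 201 Created (first mount) or 200 OK (update) style response indicates the
# request was accepted; the actual NETCONF connection happens asynchronously.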

At this point, ODL will automatically:

  • perform the NETCONF handshake
  • download all the yang models
  • syntax check them
  • render them into a RestConf API
  • maintain a persistent connection to the device & operational status
  • sync the record you just PUT (and operational status) with any cluster peers
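
You can watch this progress by polling the node's operational state; once the device's yang models have been downloaded and compiled, its connection-status becomes "connected" (a sketch reusing the example variables from above):

curl -s -u admin:admin -H "Accept: application/xml" \
  "http://$CONTROLLER:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/$NODE"
# Look for <connection-status>connected</connection-status>; while schemas are
# still being resolved you may see "connecting" or "unable-to-connect" instead.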


5. You can now use GET/PUT/DELETE against any URLs and payloads your network element exposes.
If it's modeled in yang, ODL/LSC can now program it.

For example, do a GET of the whole config:
URL:
http://{{controller-ip}}:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/{{node-name}}/yang-ext:mount/
...now change the Accept header to application/json, send it again, and you'll get JSON instead. Your choice.

You can extend the URL into specific namespaces and containers to read or configure individual parts of the model:
(Vyatta/DaNOS example) http://{{controller-ip}}:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/{{node-name}}/yang-ext:mount/vyatta-resources-v1:resources
(Arista example) http://{{controller-ip}}:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/{{node-name}}/yang-ext:mount/openconfig-interfaces:interfaces
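
For instance, here is the Arista/OpenConfig path above fetched as JSON with curl (a sketch; same example variables as before, and it only returns data if the device actually advertises openconfig-interfaces):

curl -s -u admin:admin -H "Accept: application/json" \
  "http://$CONTROLLER:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/$NODE/yang-ext:mount/openconfig-interfaces:interfaces"
# Swap the Accept header back to application/xml if you prefer XML output.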

6. Contact us for more!


APPENDIX:

A. Example native Linux-KVM VM creation using the "virtinst" tool:
virt-install -n LSC-demo --os-type linux --os-variant ubuntu16.04 --cpu host --vcpus=8 -v --memory=16384 --disk path=/#yourpath#/LSC-demo.qcow2,bus=virtio,format=qcow2 --network bridge=br0,model=virtio --graphics vnc,password=abcd1234,listen=0.0.0.0 --noautoconsole --import

B. DIY Alternative for building your own LSC VM from our packages:
A KVM, OpenStack, or VirtualBox hypervisor capable of running a QCOW2 VM image.
Examples:
https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1901.qcow2
https://cloud-images.ubuntu.com/releases/xenial/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img

Typical sizing for a "medium" controller is 16GB RAM, 8 cores/threads, and ~25GB storage.
Note: OpenDaylight & LSC can run in any Linux environment, Docker containers, K8S, Swarm, AWS, bare metal, etc. -- we're just picking a common VM for convenience here.

Go to www.luminanetworks.com and register, so you can download your preferred installer package and the formal product documentation.

C. NETCONF mount troubleshooting
Many vendors claim full NETCONF+yang support but haven't thoroughly tested everything or had it independently audited. *ALL* vendors go through this at some point; it's part of becoming truly open standards-compliant.

Enabling debug logging for NETCONF in ODL/LSC:
DELETE any existing failed/stalled mount point, clean the controller/cache completely, restart the network element, then set karaf to TRACE level:
date #always get a date-stamp for convenience
/opt/lumina/lsc/bin/client
...
log:set TRACE org.opendaylight.netconf
<CTRL-D to exit client>
tail -f /opt/lumina/lsc/controller/data/log/karaf.log
...
(After testing, run the client again and set the level back with "log:set DEFAULT org.opendaylight.netconf"; otherwise you're leaving the controller in TRACE mode for everyone.)

D. Variations on popular network elements' base configs:

i. Arista EOS, vEOS, cEOS
Start by configuring a mgmt interface, IP, and user.
management api netconf
transport ssh
management ssh
hostkey server rsa
log-level debug
wr m

ii. JunOS
Juniper vMX configuration example:
Create a lumina user with password credentials for NETCONF management:
configure
set system login user lumina class superuser authentication plain-text-password
> New password: ...
> Retype new password: ...
set system root-authentication plain-text-password
> New password: ...
> Retype new password: ...
show system
commit
# Enable NETCONF ssh access
configure
edit system services
set netconf ssh
set netconf ssh port 830
set netconf rfc-compliant
set netconf yang-compliant
# optional:
# set system services ssh root-login allow
commit
exit
exit
configure
set system services netconf traceoptions file lumina-netconf-trace.log
set system services netconf traceoptions file size 3m
set system services netconf traceoptions file files 20
set system services netconf traceoptions file world-readable
set system services netconf traceoptions flag all
# In case filtered logging desired:
#set system services netconf traceoptions file match <tagHere: error-message>
commit
exit

iii. IOS-XR
Start by configuring a mgmt interface, IP, and user.
ssh server v2
ssh server netconf port 830
netconf agent tty
netconf-yang agent ssh
Verification:
do show netconf-yang statistics
do show netconf-yang clients

iv. DaNOS/Vyatta
Start by setting up a dataplane interface with an IP, and ensure it's routable.
After that:
set service 'netconf'
set service ssh port '830'
set service ssh port '22'
set system login user vyatta authentication encrypted-password '********'
set system login user vyatta level 'superuser'
commit

v. Nokia
TBC


Additional Resources:

Lumina SDN Controller