Creating a Juniper vEX VM for Libvirt/vagrant
I needed to do some testing of Juniper Contrail API calls, and for that I could use some Juniper switches controlled by Contrail. As Juniper has recently launched a virtual vEX image, I figured I’d give it a go. Below are build instructions for creating a vagrant box for Juniper vEX.
Tools
- Vagrant: Vagrant from HashiCorp. Vagrant is a tool for building and managing virtual machine environments in a single workflow.
- vEX: The Juniper vEX VM images. Note I’m using a KVM (qcow2) image, and that a support login is required to download the images.
Workflow
- Download a qcow2 version of the Juniper vEX VM
I am using version 23.1R1.8 for this example. The file I downloaded was named vjunos-switch-23.1R1.8.qcow2.
- Boot the VM manually once to bootstrap-configure it to run in a vagrant environment:
I used the following script to do this:
virt-install \
--connect=qemu:///system \
--name=vex \
--cpu=host \
--vcpus=4 \
--os-variant=freebsd10.0 \
--hvm \
--arch=x86_64 \
--ram 8192 \
--disk path=vjunos-switch-23.1R1.8.qcow2,size=4,device=disk,bus=virtio,format=qcow2 \
--boot hd \
--network=network:vagrant-libvirt,model=virtio \
--graphics none \
--import
- You will see the VM boot. Note it takes about 3-5 minutes to boot up:
Starting install...
Connected to domain vex
Escape character is ^]
[ 0.000000] Linux version 4.8.28-WR9.0.0.20_standard (builder@svl-jre-ubm1601) (gcc version 6.2.0 (GCC) ) #1 SMP PREEMPT Tue Jan 10 05:16:50 PST 2023
[ 0.000000] Command line: BOOT_IMAGE=vmlinuz rootwait max_loop=255 rw clock=pit console=tty0 console=ttyS0,115200 root=PARTUUID=eddefbdd-02 altroot=/dev/vda2 isolcpus=2
- Log in and start the Junos CLI
login: root
<NO PASSWORD NEEDED>
root@:~ # cli
- Configure the VM for a vagrant / libvirt environment
The following changes need to be made to support vagrant:
- Configure fxp0 to learn its IP address via DHCP
- Configure a vagrant admin account, and allow login via the vagrant insecure SSH key
- Configure a DNS server
In addition, in my vagrant environment I’m using the second ethernet port of each network device to connect to a simulated out-of-band network for configuration via ansible. To accomplish this, the following additional changes will be made to the bootstrap config:
- Configure ge-0/0/0.0 to learn its IP via DHCP
- Allow ping and ssh (and all management services) on ge-0/0/0.0
The following is the complete config snippet to paste into the VM:
configure
delete chassis auto-image-upgrade
set chassis fpc 0 pic 0 number-of-ports 10
set system host-name vex
set system root-authentication encrypted-password "$6$vFESJ0ew$BBKV6k9SmnCZqcTNXo3Ybuj/Zu/3XGaQfGErq4x4bp9j.nWdJesHOC3jMKIKIGd6ruyENEVAofGkk0QHazRsd."
set system name-server 1.1.1.1
set system login user vagrant class super-user
set system login user vagrant authentication ssh-rsa "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key"
set system services ssh
set interfaces ge-0/0/0 unit 0 family inet dhcp
set interfaces fxp0 unit 0 family inet dhcp
set system root-authentication plain-text-password
commit
Use the password Vagrant for the root password when prompted, and commit the configuration so the changes persist across the shutdown.
- Shut down the VM
run request system power-off
You’ll see shutdown messages, and you’ll be back to a shell prompt when shutdown is complete:
Domain creation completed.
- Create a metadata file that will be used to create a vagrant box
In the same directory, create a file called metadata.json with the following contents:
{"provider":"libvirt","format":"qcow2","virtual_size":4}
- Download the box creation script
The good folks who maintain the vagrant-libvirt plugin have a script that can be used to convert qcow2 images to a vagrant box. Download the libvirt conversion script:
curl -O https://raw.githubusercontent.com/vagrant-libvirt/vagrant-libvirt/master/tools/create_box.sh
- Use the previously downloaded script to create a vagrant box
bash ./create_box.sh vjunos-switch-23.1R1.8.qcow2
You’ll see the following output as the script runs:
==> Creating box, tarring and gzipping
./metadata.json
./Vagrantfile
./box.img
Total bytes written: 4297533440 (4.1GiB, 655MiB/s)
==> vjunos-switch-23.1R1.8.box created
==> You can now add the box:
==> 'vagrant box add vjunos-switch-23.1R1.8.box --name vjunos-switch-23.1R1.8'
- Create a metadata file called vex.json so that the vagrant box is added with the correct version number
Note the url should be updated to the full path of the box file, whether that is your current working directory or wherever you copied the box file to.
{
  "name": "juniper/vex",
  "description": "Juniper vEX container",
  "versions": [
    {
      "version": "23.1R1.8",
      "providers": [
        {
          "name": "libvirt",
          "url": "file:///home/username/images/juniper/vex/vjunos-switch-23.1R1.8.box"
        }
      ]
    }
  ]
}
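A malformed metadata file produces a confusing error from vagrant box add, so it's worth validating the JSON before use. The following sketch writes the file from the article (adjust the url to wherever your .box actually lives) and checks it with python's built-in JSON tool:

```shell
# Write vex.json (content from the article; the url path is an example
# and should point at your actual .box file) and validate it.
cat > vex.json <<'EOF'
{
  "name": "juniper/vex",
  "description": "Juniper vEX container",
  "versions": [
    {
      "version": "23.1R1.8",
      "providers": [
        {
          "name": "libvirt",
          "url": "file:///home/username/images/juniper/vex/vjunos-switch-23.1R1.8.box"
        }
      ]
    }
  ]
}
EOF
# Fails loudly on a typo instead of a cryptic error from vagrant later.
python3 -m json.tool vex.json > /dev/null && echo "vex.json is valid JSON"
```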
- Add the box to Vagrant
vagrant box add vex.json
The following shows a successful addition of your new vagrant box:
==> box: Loading metadata for box 'vex.json'
box: URL: file:///home/pete/images/juniper/vex/vex.json
==> box: Adding box 'juniper/vex' (v23.1R1.8) for provider: libvirt
box: Unpacking necessary files from: file:///home/pete/images/juniper/vex/vjunos-switch-23.1R1.8.box
==> box: Successfully added box 'juniper/vex' (v23.1R1.8) for 'libvirt'!
You can verify the box is loaded:
vagrant box list
CumulusCommunity/cumulus-vx (libvirt, 5.0.1)
arista/veos (libvirt, 4.27.0F)
cisco/csr1000v (libvirt, 17.03.02)
cisco/iosv (libvirt, 15.6-2-T)
cisco/nxos (libvirt, 9.2.2)
fortinet/fortios (libvirt, 7.0.3)
juniper/vex (libvirt, 23.1R1.8)
juniper/vsrx3 (libvirt, 19.2R1.8)
juniper/vsrx3 (libvirt, 21.3R1.9)
- Run the vEX box
I’m using netlab these days to run vendor images. Here’s an example topology.yml to spin up a two-switch network.
#
# libvirt lab simulating Juniper vEX switches
#
---
defaults:
  device: vex
nodes:
  r1:
  r2:
links:
- r1-r2
- Profit!
netlab up topology.yml
That said, the vEX takes nearly 10 minutes to boot and consumes 4 cores and 8 GB of memory, so it’s not the most svelte and nimble virtual networking device out there…
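If you'd rather kick the tires without netlab, a minimal Vagrantfile can boot the box directly. This is a sketch, assuming the box was added as juniper/vex as shown earlier and that the vagrant-libvirt plugin is installed; the memory and cpu values mirror the requirements noted above:

```shell
# Sketch of a minimal Vagrantfile for the juniper/vex box (assumes the
# vagrant-libvirt plugin is installed; resources match the vEX needs).
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "juniper/vex"
  config.vm.box_version = "23.1R1.8"
  config.vm.provider :libvirt do |libvirt|
    libvirt.memory = 8192  # vEX wants 8 GB of RAM
    libvirt.cpus = 4       # ...and 4 cores
  end
end
EOF
```

A vagrant up --provider=libvirt followed by vagrant ssh should then get you to the switch, given the vagrant account baked into the bootstrap config, though the same long boot time applies.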