Creating a Juniper SRX VM for Libvirt/vagrant
I needed to replicate a Juniper SRX firewall to create and test a BGP config for a customer. I could have done this in GNS3/EVE-NG or some other virtual environment, but I find that Vagrant/Libvirt gives me the most usability when reusing a setup over time, such as for automated testing.
So with that goal in mind, I needed to create a vagrant box of Juniper’s vSRX firewall, specifically for use with libvirt. I’ve drawn from Brad’s blog multiple times, and it has a great Juniper vMX example, so I’ll take a similar approach.
Tools
- Vagrant: Vagrant from HashiCorp. Vagrant is a tool for building and managing virtual machine environments in a single workflow.
- vSRX: The Juniper vSRX VM images. Note I’m using a KVM (qcow2) image, and that a support login is required to download the images.
Workflow
To set up the Juniper vSRX VM, I’m essentially following Brad’s Arista walkthrough and swapping out the relevant parts for Juniper:
- Download a qcow2 version of the Juniper vSRX VM
I am using version 17.3R1.10 for this example. The file I downloaded was named media-vsrx-vmdisk-17.3R1.10.qcow2.
- Boot the VM manually once to bootstrap a configuration that lets it run in a vagrant environment:
I used the graphical Virtual Machine Manager to do this. Note the following settings:
- Name: srx
- 9 vCPUs
- 16G Memory
- i440fx Chipset (MANDATORY!)
- IDE Driver for Disk1
- Add a second NIC
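If you prefer the command line over the graphical Virtual Machine Manager, the same one-off VM can be defined with virt-install. This is a sketch under a few assumptions: the qcow2 image sits in the current directory, and your libvirt network is the default one. `--machine pc` selects the i440fx chipset, and the two `--network` flags give the VM its two NICs:

```
# Sketch only -- adjust the disk path and network name to your environment.
# --machine pc  => i440fx chipset (mandatory for vSRX)
# bus=ide       => IDE driver for disk 1
# two --network => fxp0 plus the second NIC
virt-install \
  --name srx \
  --vcpus 9 \
  --memory 16384 \
  --machine pc \
  --disk path=./media-vsrx-vmdisk-17.3R1.10.qcow2,format=qcow2,bus=ide \
  --network network=default \
  --network network=default \
  --import \
  --graphics none \
  --console pty,target_type=serial
```

The serial console (`--console pty`) matters here: vSRX boots on ttyd0, so a serial console is where you’ll see the boot messages and login prompt.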
- You will see the VM boot. Note that it takes at least 5 minutes to boot:
Loading /boot/defaults/loader.conf
/kernel text=0xc1fc6c data=0x6bf1c+0x1724c0 syms=[0x4+0xb9b60+0x4+0x116be2]
/boot/modules/libmbpool.ko text=0xcd0 data=0x10c |
/boot/modules/if_em_vsrx.ko text=0x18770 data=0x840+0x1a4 /
/boot/modules/virtio.ko text=0x20cc data=0x204 syms=[0x4+0x7a0+0x4+0x900]
/boot/modules/virtio_pci.ko text=0x2d8c data=0x1fc+0x8 syms=[0x4+0x8a0+0x4+0xaa3]
/boot/modules/virtio_blk.ko text=0x28ac data=0x1ec+0xc syms=[0x4+0x890+0x4+0x906]
/boot/modules/if_vtnet.ko text=0x604c data=0x354+0x10 syms=[0x4+0xcf0+0x4+0xde5]
/boot/modules/pci_hgcomm.ko text=0x1658 data=0x1a8+0x44 syms=[0x4+0x5f0+0x4+0x6d4]
/boot/modules/pvi_db.ko text=0x3080 data=0x31e+0x2e syms=[0x4+0x5d0+0x4+0x56d]
/boot/modules/chassis.ko text=0x974 data=0x1cc+0x10 syms=[0x4+0x370+0x4+0x356]
Hit [Enter] to boot immediately, or space bar for command prompt.
Booting [/kernel]...
...
Sun Sep 5 12:17:07 UTC 2021
vsrx (ttyd0)
login:
- Configure the VM for a vagrant / libvirt environment
The following changes need to be made to support vagrant:
- Configure fxp0 to learn its IP address via DHCP
- Configure a vagrant admin account, and allow login via the vagrant insecure SSH key
- Configure a DNS server
In addition, in my vagrant environment I’m using the second ethernet port of each network device to connect to a simulated out-of-band network for configuration via ansible. To accomplish this, the following additional changes will be made to the bootstrap config:
- Configure ge-0/0/0.0 to learn its IP via DHCP
- Allow ping and ssh (and all management services) on ge-0/0/0.0
The following is the complete config snippet to paste into the VM:
configure
set system host-name vsrx
set system root-authentication encrypted-password "$6$vFESJ0ew$BBKV6k9SmnCZqcTNXo3Ybuj/Zu/3XGaQfGErq4x4bp9j.nWdJesHOC3jMKIKIGd6ruyENEVAofGkk0QHazRsd."
set system name-server 8.8.8.8
set system login user vagrant class super-user
set system login user vagrant authentication ssh-rsa "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key"
set system services ssh
set system syslog user * any emergency
set system syslog user * pfe none
set system syslog user * match "!(.*Scheduler Oinker*.|.*Frame 0*.|.*ms without yielding*.)"
set interfaces ge-0/0/0 unit 0 family inet dhcp
set interfaces fxp0 unit 0 family inet dhcp
set system root-authentication plain-text-password
- Shut down the VM
run request system power-off
You’ll see shutdown messages, and you’ll be back to a shell prompt when shutdown is complete:
The system is halted.
Power down.
Domain creation completed.
You can restart your domain by running:
virsh --connect qemu:///system start srx
- Create a metadata file that will be used to create a vagrant box
In the same directory, create a file called metadata.json with the following contents:
{"provider":"libvirt","format":"qcow2","virtual_size":6}
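If you are scripting these steps, the file can be written and sanity-checked from the shell. The `python3 -m json.tool` call is just a quick way to confirm the JSON parses:

```shell
# Write the box metadata the create_box.sh script expects,
# then confirm it is valid JSON.
cat > metadata.json <<'EOF'
{"provider":"libvirt","format":"qcow2","virtual_size":6}
EOF
python3 -m json.tool metadata.json
```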
- Download the box creation script
The good folks who maintain the vagrant-libvirt plugin have a script that can be used to convert qcow2 images to a vagrant box. Download the libvirt conversion script:
curl -O https://raw.githubusercontent.com/vagrant-libvirt/vagrant-libvirt/master/tools/create_box.sh
- Use the previously downloaded script to create a vagrant box
bash ./create_box.sh media-vsrx-vmdisk-17.3R1.10.qcow2
You’ll see the following output as the script runs:
==> Creating box, tarring and gzipping
./metadata.json
./Vagrantfile
./box.img
Total bytes written: 4068618240 (3.8GiB, 35MiB/s)
==> media-vsrx-vmdisk-17.3R1.10.box created
==> You can now add the box:
==> 'vagrant box add media-vsrx-vmdisk-17.3R1.10.box --name media-vsrx-vmdisk-17.3R1.10'
- Create a metadata file called srx.json so that the vagrant box is added with the correct version number
Note the url should be updated to the full path of the box file: either your current working directory, or wherever you copied the box file to. In this example, I renamed the box file from the previous step to srx.box.
{
"name": "juniper/srx",
"description": "Juniper SRX",
"versions": [
{
"version": "17.3R1.10",
"providers": [
{
"name": "libvirt",
"url": "file:///home/pete/images/juniper-srx/srx.box"
}
]
}
]
}
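Since the url has to be an absolute path, one way to avoid hand-editing it is to generate srx.json with the current directory substituted in. This sketch assumes the box from the previous step has been renamed to srx.box in the current directory:

```shell
# Generate srx.json with the box URL pointing at the current directory.
# Assumes the .box file from create_box.sh was renamed to srx.box here.
cat > srx.json <<EOF
{
  "name": "juniper/srx",
  "description": "Juniper SRX",
  "versions": [
    {
      "version": "17.3R1.10",
      "providers": [
        {
          "name": "libvirt",
          "url": "file://$(pwd)/srx.box"
        }
      ]
    }
  ]
}
EOF
# Confirm the generated file parses
python3 -m json.tool srx.json
```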
- Add the box to Vagrant
vagrant box add srx.json
The following shows a successful addition of your new vagrant box:
==> box: Loading metadata for box 'srx.json'
box: URL: file:///home/pete/images/juniper-srx/srx.json
==> box: Adding box 'juniper/srx' (v17.3R1.10) for provider: libvirt
box: Unpacking necessary files from: file:///home/pete/images/juniper-srx/srx.box
==> box: Successfully added box 'juniper/srx' (v17.3R1.10) for 'libvirt'!
You can verify the box is loaded:
vagrant box list
CumulusCommunity/cumulus-vx (libvirt, 3.7.12)
CumulusCommunity/cumulus-vx (libvirt, 4.2.1)
arista/veos (libvirt, 4.25.0F)
fortinet/fortios (libvirt, 6.0.6)
fortinet/fortios (libvirt, 6.4.3)
generic/ubuntu1804 (libvirt, 3.0.34)
generic/ubuntu2004 (libvirt, 3.0.32)
juniper/srx (libvirt, 17.3R1.10)
- Modify your Vagrantfile
Finally, to use your newly-created vSRX vagrant box, add something similar to the following to an existing Vagrantfile. Note this is one of my working examples (the offset variable used for the UDP tunnel ports is defined elsewhere in my Vagrantfile), and it’s beyond the scope of this blog post to explain the complexity that exists in vagrant configuration. However, feel free to reach out to me if you have questions.
That said, note some key settings. This VM won’t work without them:
- insert_key = false
- guest = :tinycore
- boot_timeout = 2400
- memory = 16384
- cpus = 9
- disk_bus = ‘ide’
- nic_adapter_count = 14
config.vm.define "srx" do |device|
device.ssh.insert_key = false
device.vm.box = "juniper/srx"
device.vm.guest = :tinycore
device.vm.boot_timeout = 2400
device.vm.provider :libvirt do |v|
v.default_prefix = "lab_"
v.memory = 16384
v.cpus = 9
v.disk_bus = 'ide'
v.nic_adapter_count = 14
end
device.vm.synced_folder ".", "/vagrant", disabled: true
device.vm.network "private_network",
:mac => "44:38:39:00:00:5a",
:libvirt__tunnel_type => 'udp',
:libvirt__tunnel_local_port => "#{ 12047 + offset }",
:libvirt__tunnel_port => "#{ 11047 + offset }",
:libvirt__iface_name => 'eth1',
auto_config: false
device.vm.network "private_network",
:mac => "44:38:39:00:01:04",
:libvirt__tunnel_type => 'udp',
:libvirt__tunnel_local_port => "#{ 12136 + offset }",
:libvirt__tunnel_port => "#{ 11136 + offset }",
:libvirt__iface_name => 'eth2',
auto_config: false
end
- Profit!
vagrant up srx
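Remember the long boot time (hence the generous boot_timeout). Once the VM is up, you can confirm vagrant can reach it over SSH with the insecure key. Since the vagrant user’s super-user login class drops SSH sessions straight into the Junos CLI, something like this should work:

```
# Check the VM came up, then run a Junos CLI command over vagrant's SSH
vagrant status srx
vagrant ssh srx -c "show version"
```

If the ssh step hangs or fails, the usual suspects are the fxp0 DHCP config and the vagrant user’s authorized key from the bootstrap config above.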