What the… : Weirdness/Oddities Encountered in the Past Week(ish) #1

I have decided to start writing down some of the weird networking things I encounter in my day job. I’m hoping it helps others fix issues in their own environments, or at least gives you a chuckle. 🙂

/31 Gotchas on Cisco/Viptela Equipment

  • Weirdness: I ran into this last fall & then promptly forgot about it. /31 subnet masks are supported on the Cisco (formerly Viptela) SD-WAN gear and have worked great for me for the past 18 months, except on vEdge 5000s. vManage will let you configure the interfaces with a /31 and the devices will accept the config, but they will not pass any traffic.
  • Investigation: I haven’t seen this issue on vEdge 100s, 1000s, or any converted ISRs; only on the vEdge 5000s.
  • Fix: Changed the subnets to /30s and everything works (a rough before/after sketch is below). I can ping it now, so it has to be rock solid…
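
For reference, here is a minimal sketch of the kind of interface config involved. The VPN number, interface name, and addresses are placeholders rather than our actual config, but the shape is the same: the /31 version is accepted cleanly by vManage and the vEdge 5000, it just never forwards traffic.

! Accepted on a vEdge 5000, but no traffic will pass
vpn 10
 interface ge0/1
  ip address 192.0.2.0/31
  no shutdown
 !
!
! Workaround: burn the two extra addresses and use a /30
vpn 10
 interface ge0/1
  ip address 192.0.2.1/30
  no shutdown
 !
!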

Routing issues for specific PI IPv4 address space

  • Weirdness: Thanks to a predecessor in the 90s at $DayJob, we have a good amount of provider-independent (PI) IPv4 space (especially for a mid-sized company). We ran into an issue where some users trying to access a SaaS application from one location (and one PI block) would get constant timeouts. If those same users used their cell phones, the application responded immediately. To add to the confusion, users at a second location (and a second PI block) could access the application without issue.
  • Investigation: Using WinMTR, we traced the path from our location (in the USA) to the SaaS application (hosted in Sweden). It appeared that somewhere in the UK we started seeing 50-80% packet loss. When we ran the same test from the second location (and its adjoining /24 block), we did not see the same loss.
  • Fix (sorta): Using a centralized traffic data policy on our Cisco SD-WAN equipment, we matched the SaaS provider’s /17 network and steered it out a different Internet circuit at the first location, one that does not use our PI address space. As soon as this policy was pushed to our vSmart(s) and vEdge(s), the webpage started responding immediately. A rough CLI sketch of the policy is below the screenshot.
(Screenshot: vManage Data Policy Rule)
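
If you would rather see CLI than a screenshot, the policy on the vSmart looks roughly like the sketch below. To be clear, this is a hedged example and not our production policy: the list names, site ID, VPN, color, and the 198.18.0.0/17 prefix are all placeholders, and depending on how your circuits are built the action could just as easily be a NAT/DIA action instead of a local-tloc color preference.

policy
 data-policy SAAS-STEERING
  vpn-list CORP-VPNS
   sequence 10
    match
     destination-data-prefix-list SAAS-PROVIDER
    action accept
     set
      ! prefer the circuit/color that does not use our PI space
      local-tloc color biz-internet
   default-action accept
 lists
  data-prefix-list SAAS-PROVIDER
   ip-prefix 198.18.0.0/17
  vpn-list CORP-VPNS
   vpn 10
  site-list SITE-A
   site-id 100
!
apply-policy
 site-list SITE-A
  data-policy SAAS-STEERING from-service
!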

IP Phones dropping out

  • Weirdness: Two call center employees reported that the screens on their Cisco IP Phones went dark momentarily and that they were kicked out of Cisco Finesse multiple times over a two-hour period.
  • Investigation: Looking at the switch logs, the two associated switch ports did not log any up/down events, nor did they log any PoE removed/granted events. The two ports were also on separate switches in the switch stack. Talking to the users, the phones were not going through a full reboot process; the screens were just going dark and then coming back. For one user we replaced their patch cable and then their entire phone, but they still had the issue. While going through the switch logs, though, we came across another switch port (again, on an entirely different switch in the stack) that was logging frequent PoE events:
14:37:12.163 UTC: %ILPOWER-5-POWER_GRANTED: Interface Gi3/0/27: Power granted (3750SWITCH-3)
14:37:12.691 UTC: %ILPOWER-5-IEEE_DISCONNECT: Interface Gi3/0/27: PD removed (3750SWITCH-3)
14:37:28.778 UTC: %ILPOWER-5-IEEE_DISCONNECT: Interface Gi3/0/27: PD removed (3750SWITCH-3)
14:37:45.254 UTC: %ILPOWER-5-IEEE_DISCONNECT: Interface Gi3/0/27: PD removed (3750SWITCH-3)
14:38:01.553 UTC: %ILPOWER-5-POWER_GRANTED: Interface Gi3/0/27: Power granted (3750SWITCH-3)
14:38:01.905 UTC: %ILPOWER-5-IEEE_DISCONNECT: Interface Gi3/0/27: PD removed (3750SWITCH-3)
14:38:18.462 UTC: %ILPOWER-5-IEEE_DISCONNECT: Interface Gi3/0/27: PD removed (3750SWITCH-3)
14:38:34.702 UTC: %ILPOWER-5-POWER_GRANTED: Interface Gi3/0/27: Power granted (3750SWITCH-3)
14:38:34.828 UTC: %ILPOWER-5-IEEE_DISCONNECT: Interface Gi3/0/27: PD removed (3750SWITCH-3)
14:38:50.791 UTC: %ILPOWER-5-IEEE_DISCONNECT: Interface Gi3/0/27: PD removed (3750SWITCH-3)
  • Fix: We went to the desk that was logging the PoE events & there we found that the user had NO IP PHONE (DUN, DUN, DUUUUNNN). They were using a softphone on their PC, and their PC was plugged into the logged port. They were also using a 25-foot patch cable to cover the roughly 5 feet they needed to reach the wall port. We replaced it with a 7-foot patch cable; the switch stopped logging the PoE events and the other users stopped reporting issues with their phones.
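
If you are chasing a similar ghost, these are the kinds of commands that make a misbehaving port easy to spot (the interface here is the one from the log above; swap in your own):

3750SWITCH# show logging | include ILPOWER
3750SWITCH# show power inline
3750SWITCH# show power inline gigabitEthernet 3/0/27 detail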

Viptela 18.4/Cisco IOS-XE SD-WAN 16.10 Released

Happy New Year!!!

A bit of news that got lost in the pre-holiday change freeze: Cisco released a new version of their SD-WAN software.

Version 18.4 for the vManage/vBond/vSmart & vEdge devices and the corresponding IOS-XE SD-WAN version 16.10 were released on December 20th, 2018. This is a “short-term” support release that greatly expands SD-WAN support on Cisco ISR & ASR hardware and adds a bunch of new security features.

Up until this point, if you wanted to run the SD-WAN IOS-XE image on your existing/new ISRs & ASRs, you were limited to:

  • ASR 1001-X & HX
  • ASR 1002-X & HX
  • ISR 1111-8P and the LTE EA & LA variants
  • ISR 1117-4P LTE EA & LA variants
  • ISR 4221, 4321, 4331, 4351
  • ENCS 5412 running ISRv

Highlights in 18.4/16.10

Now, with 18.4/16.10, IOS-XE SD-WAN support expands to:

  • CSR 1000v (Yay!)
  • Nearly all of the rest of the ISR 1100s (BTW, do we really need all of these different SKUs???)
    • 1101-4P
    • 1111-4P and the LTE EA & LA variants
    • 1116-4P and the LTE EA variant
    • 1117-4PM & 1117-4PM LTE EA
    • 1111X-8P
    • 1111-8PWx with integrated WiFi
    • 1111-8PLTEEAWx with integrated WiFi and LTE
  • ENCS 5104, 5406, 5408

Additional Software Features

The most exciting part for me is that this release also adds some of the road-mapped security features that Cisco announced at Networking Field Day 19 (#NFD19 YouTube recording), including firewall, IPS, and OpenDNS Umbrella support.

Second, this release adds IPv6/dual-stack support on the service side of the SD-WAN; previously, IPv6 support was limited to the WAN side of the platform. Unfortunately, this only applies to the IOS-XE platforms, not the Viptela vEdges. I reached out to Viptela vTAC to see if full IPv6 support was slated for the vEdges, but was informed that it is NOT currently road-mapped to be ported over. (I have an upcoming rant on that one.)

As always, there are a bunch of other features in this release; hit up the release notes for the full details.

Closing Thoughts

My personal warning: 18.4 is a short-term support release only, and you cannot downgrade your vSmart/vBond/vManage back to a previous release if it turns out to be unstable.

I’m currently weighing the positives & negatives of upgrading to this release.

I do have a couple of spare ISR 1111-8PWBs on my desk…
I could also spin up some CSR 1000v instances for testing…
I wonder if my Cisco/Viptela SEs can float me some temporary licenses…
😉

Runt Frame – Firepower Quick Tip – Management Interface & SNMP/Syslog

Runt Frames are going to be quick tips from my day-to-day life as a network engineer.

So, let’s say that you’re preparing to migrate your firewalls to some shiny new ASAs or Firepowers running FTD mode, even though the Internet has tried to warn you off… (Reddit – Firepower Rant Part 1 & Reddit – Firepower Rant Part 2)

As part of your initial setup, you start to configure SNMP & Syslog, but to your horror you find that the system does not allow you to source the traffic from the management interface! It wants you to use a standard data interface, but you can’t activate any of those until you’re ready to complete the migration!

There is a workaround. But it’s not the cleanest.

You can use the “diagnostic” interface. This is a logical interface that shares the physical management port (at least on the ASA 5500-Xs). So put an IP address on the diagnostic interface (it must be in the same subnet used on the management interface), and then manually add the diagnostic interface to the SNMP settings under Platform Settings in FMC.

“But Justin,” you say, “It still doesn’t work!”

Yep. I ran into that myself. The diagnostic interface doesn’t utilize the default gateway that is configured on the management interface. You have to manually add routes for traffic from the diagnostic interface to your SNMP management stations. You can repeat this process if you want to do the same for Syslog traffic.
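
To make that concrete, here is a rough sketch of what the relevant pieces end up looking like in the ASA-style running config once FMC deploys it. Everything here is hypothetical: the addresses, community string, SNMP station, and syslog server are made up, “diagnostic” is simply the name given to the interface, and the physical interface name will vary by platform.

interface Management1/1
 management-only
 nameif diagnostic
 security-level 0
 ip address 192.0.2.10 255.255.255.0
!
! Source SNMP & Syslog from the diagnostic interface
snmp-server host diagnostic 198.51.100.25 community ***** version 2c
logging host diagnostic 198.51.100.26
!
! Manual route back to the management stations (the diagnostic interface
! does not inherit the management interface's default gateway)
route diagnostic 198.51.100.0 255.255.255.0 192.0.2.1 1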



Wow… Someone might see this

So… If you are actually reading this (and I’m sorry), this network engineer successfully got WordPress working. 🙂

I’m starting this blog to document my thoughts on… something. Probably mostly enterprise networking & security related, with maybe some virtualization & server stuff too. But if I’m being honest, I’m also trying to document my journey to the next level in my IT career. Whether that’s a transition into something like Site Reliability Engineering (SRE) or Network Reliability Engineering (NRE), something DevOpsy, or whatever tomorrow brings, my goal is to document my thoughts, trials, tribulations, failures, and successes here.

Soon (hopefully) there will be some real content here, and this post will be lost to the Wayback Machine.