My blog posts are traditionally technology focused, but there’s another side of my life outside of IT: I’m active in pet rescue (fostering, volunteering at clinics, etc.), and I felt I needed to share this story. I just returned from spending three days on the Red Lake Reservation in Northern Minnesota, and just hours after some of us departed, a newborn puppy was brought to us by an incredibly kind woman. What follows is the story of a puppy named Chance, in the words of the eloquent Kim.
It wasn’t his fault his family abandoned his momma and her babies.
It wasn’t his fault his momma didn’t receive proper nutrition.
It wasn’t his fault he was the runt.
It wasn’t his fault that winter came early to the rez.
It wasn’t chance that today, a kind woman scooped him up after his momma and littermates took off in search of food.
It wasn’t chance we were on the rez running our last spay/neuter clinic of the year.
It wasn’t chance the kind lady brought him to us.
It wasn’t chance that Dr. Kim was still on-site.
He was weak, dehydrated, and starving, but he was holding on.
I held out hope that there was a chance we could help him pull through.
I wrapped him in a warm blanket and held him, told him he was loved.
I agreed with the vet that he wouldn’t be able to survive the five-hour trip to the metro.
I held him and gave him what every puppy deserves. I gave him unconditional love and a name. His name was Chance.
Chance crossed over being loved. He crossed over with someone crying about his far too brief time on this planet. Chance crossed over today, and he will be missed.
This is the heartbreak of rescue. This is the cost of falling in love a thousand times. This is the cost that rescue warriors pay, doing what we can, one dog at a time, trying to make the world better and safer for our canine companions.
Chance, you were loved, and you will be missed. I and others in rescue will continue in your memory. Celebrating our victories and mourning our losses.
Run free little man, no more hunger and no more pain, only joy for you little one, only joy.
I recently experienced an unexpected power down of my IronPort Email Security Appliance which led to the appliance generating an hourly email containing the following:
The Critical message is:
An application fault occurred: ('aggregator/master_aggregator.py _process_export_files|605', "<class 'reporting.aggregator.master_aggregator.ExportProcessError'>", '', '[aggregator/master_aggregator.py main|272] [aggregator/master_aggregator.py watch_incoming_queue|401] [aggregator/master_aggregator.py _process_export_files|605]')
After doing some investigating, I noticed that I also wasn’t seeing any of the normal reporting data in the web interface on the appliance. The resolution ended up being quite simple in my case. Here are the steps to delete the reporting database (all of your reporting data will be deleted; you’ve been warned).
- SSH to the IronPort appliance
- Enter diagnostic mode by typing “diagnostic”
- Then enter reporting mode by typing “reporting”
- Finally type “deletedb”
- You’ll be prompted to confirm that you do indeed wish to delete the reporting database
This process took a few minutes on my lab system and then reporting started working again.
Here’s what the process looked like:
Choose the operation you want to perform:
- RAID - Disk Verify Utility.
- DISK_USAGE - Check Disk Usage.
- NETWORK - Network Utilities.
- REPORTING - Reporting Utilities.
- TRACKING - Tracking Utilities.
- RELOAD - Reset configuration to the initial manufacturer values.
- SERVICES - Service Utilities.
The reporting system is currently enabled.
Choose the operation you want to perform:
- DELETEDB - Reinitialize the reporting database.
- DISABLE - Disable the reporting system.
This command will delete all reporting data and cannot be aborted. In some instances it may take several minutes to complete. Please do not attempt a system restart until the command has
returned. Are you sure you want to continue? [N]>Y
Reseting reporting data......
The reporting system is currently enabled.
If you’re reading this I’m guessing you already know what NTP (Network Time Protocol) is, but as a quick refresher: it’s a simple network protocol used to synchronize a device’s clock to a reference time source.
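To show just how simple the protocol is, here’s a minimal Python sketch of the SNTP packet format (SNTP is the simplified form of NTP defined in RFC 4330). The actual UDP send/receive is deliberately omitted; this only builds a client request and extracts the server’s transmit timestamp from a reply:

```python
import struct

NTP_DELTA = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def build_sntp_request():
    """Build a 48-byte SNTP client request: LI=0, version 3, mode 3 (client)."""
    return b"\x1b" + 47 * b"\x00"

def parse_transmit_time(response):
    """Extract the server's transmit timestamp (whole seconds, bytes 40-43,
    counted from 1900) from a 48-byte reply and convert it to Unix time."""
    ntp_seconds = struct.unpack("!I", response[40:44])[0]
    return ntp_seconds - NTP_DELTA
```

In a real client you’d send the request over UDP port 123 (for example, to 0.pool.ntp.org) and parse the reply; libraries like ntplib handle this for you.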
I’ve been a huge fan of the NTP Pool Project, which offers anyone, including network operators, end users, and even device manufacturers, the ability to leverage a globally distributed and highly resilient NTP time source.
In the past, I’d hosted NTP servers, but in the days of unpatched NTP servers being used for NTP amplification attacks, my ISP and I grew tired of constantly chasing down issues, and I stopped actively hosting NTP servers as part of the NTP Pool.
I’d always known the basics of how the NTP Pool operated: you point your device at one of their pool hostnames (e.g., 0.pool.ntp.org, or a geographically specific entry like 0.north-america.pool.ntp.org), a DNS lookup is performed, and the IP address of one of the NTP Pool member servers is returned.
At a small scale, you’d just need a few DNS servers and all would be well, but the NTP Pool serves millions of clients that all issue many DNS queries to find an appropriate NTP server to sync with. This much DNS traffic requires A LOT of DNS server capacity, and that’s where another type of volunteer comes in.
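To make the load-spreading idea concrete, here’s a toy Python sketch of what the pool’s DNS layer does conceptually. The member addresses and weights below are entirely made up (they use RFC 5737 documentation ranges); the real pool runs custom GeoDNS software with health- and bandwidth-based weighting:

```python
import random

# Hypothetical member servers with relative "netspeed" weights an
# operator might register; real pool membership is dynamic.
MEMBERS = {
    "192.0.2.10": 10,
    "198.51.100.20": 50,
    "203.0.113.30": 100,
}

def resolve(hostname="0.pool.ntp.org"):
    """Each DNS query returns a weighted-random member server, spreading
    clients across the pool. Answers carry a short TTL, so clients
    re-query constantly, which is why the DNS servers see so much traffic."""
    servers, weights = zip(*MEMBERS.items())
    return random.choices(servers, weights=weights, k=1)[0]
```

Two consecutive calls to resolve() can return different addresses, which is exactly the behavior you see when you repeatedly query 0.pool.ntp.org.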
After reading this page I realized I could easily offer up a virtual machine and provide some extra DNS capacity for the greater good. I installed a basic Ubuntu virtual machine, added some firewall rules, and the friendly folks at the NTP Pool Project installed their custom DNS server software and started sending queries my way. They said to expect 3-5 Mbps of DNS traffic on average, with occasional spikes above that. DNS queries and responses are very small transactions, so 3-5 Mbps is a TON of DNS traffic and a lot of connections through my internet firewall.
Take a look at the number of connections through my internet firewall before and after I started hosting NTP Pool DNS.
I would highly encourage anyone with the resources to host either an NTP server or an NTP Pool DNS server to do so.
Go forth and sync your devices to a reliable time source. Your log files and sysadmins will thank you.
I have two Windows Server 2016 servers configured as a DHCP failover pair. Everything had been working fine for more than a year until suddenly clients were not able to get leases, and the DHCP scope statistics indicated that the pools had no more addresses to assign. Using a bit of PowerShell:
$computername = "server01"
$scopeid = "10.20.0.0"
foreach ($object in Get-DhcpServerv4Lease -ComputerName $computername -ScopeId $scopeid) {
    # BAD_ADDRESS conflict entries show an AddressState of 'Declined'
    if ($object.AddressState -eq 'Declined') {
        $object
    }
}
I saw the following:
IPAddress ScopeId ClientId HostName AddressState LeaseExpiryTime
--------- ------- -------- -------- ------------ ---------------
10.20.0.101 10.20.0.0 65-00-14-0a BAD_ADDRESS Declined 7/17/2018 4:04:...
10.20.0.107 10.20.0.0 6b-00-14-0a BAD_ADDRESS Declined 7/20/2018 4:16:...
10.20.0.108 10.20.0.0 6c-00-14-0a BAD_ADDRESS Declined 7/18/2018 2:50:...
10.20.0.111 10.20.0.0 6f-00-14-0a BAD_ADDRESS Declined 7/9/2018 4:18:4...
10.20.0.120 10.20.0.0 78-00-14-0a BAD_ADDRESS Declined 7/9/2018 6:30:2...
Edited for brevity. Nearly the entire scope was filled up like this.
After much head scratching, I looked in the Windows Event Viewer on both servers and saw the following error repeatedly logged on one of the servers: “The server detected that it is out of time synchronization with partner server: server02.domain.net for failover relationship: SLP-DHCP-Failover. The time is out of sync by: 163 seconds.” This error was logged under “Applications and Services Logs -> Microsoft -> Windows -> DHCP-Server -> Microsoft-Windows-DHCP Server Events/Admin”.
I checked the clock on the partner server and noticed it was more than four minutes off from the other DHCP server. When I looked at the NTP status using the command “w32tm /query /status”, there wasn’t any NTP server defined! Once I issued the “w32tm /resync /rediscover” command it discovered the domain controller, and after a bit of time the clocks were in sync and all my DHCP issues were resolved.
It’s finally here!
After many months of hard work following Cisco’s acquisition of Viptela, Cisco has released IOS-XE 16.9.1 for the ISR 1000, ISR 4000, and ASR 1000 series platforms, supporting Viptela SD-WAN features. Detailed documentation on the process can be found here. There are some caveats to be aware of, so be sure to check the link above, but this release also brings some long-sought-after features like T1/E1 support.
In the coming months, the team will be busily working to add existing IOS-XE capabilities to the SD-WAN version of IOS-XE.
With the acquisition of Viptela by Cisco in 2017 I’ve spent quite a bit of time learning about their platform and the various components that comprise their SD-WAN solution. Below is a brief overview of the solution elements and their roles in creating an SD-WAN network.
vBond – vBond orchestrates the control and management planes. It provides the first point of authentication (white-list model), facilitates NAT traversal, and distributes the list of vSmart and vManage systems to all vEdge routers.
vSmart – vSmart coordinates fabric discovery, distributes control plane information between vEdges, disseminates data plane and application-aware routing policies to the vEdge routers, and implements control plane policies (including service chaining, multi-topology, and multi-hop), dramatically reducing control plane complexity.
vEdge – vEdge is a full-featured WAN router supporting VRRP, OSPF, and BGP. It provides a secure data plane to other vEdge routers and establishes secure control plane connections with the vSmart controller. It implements data plane and application-aware routing policies, exports information and statistics, and supports zero-touch provisioning. vEdge is available in both physical and virtual form factors.
vManage – vManage is the management plane for Cisco SD-WAN and acts as the user interface for initial configuration and ongoing maintenance activities. vManage supports multitenancy, centralized provisioning, policies and templates, troubleshooting, monitoring, and software upgrades. vManage provides a rich set of REST and NETCONF APIs.
- Overlay Management Protocol (OMP) – Control plane protocol distributing reachability, security, and policies throughout the fabric
- Transport Locator (TLOC) – Transport attachment point and next hop route attribute
- Color – Control plane tag used for IPSec tunnel establishment logic
- Site ID – Unique per-site numeric identifier used in policy application
- System IP – Unique per-device (vEdge and controllers) IPv4 notation identifier. Also used as Router ID for BGP and OSPF
- Organization Name – Overlay identifier common to all elements of the fabric
- VPN (VRF) – Device-level and network-level segmentation
This is a very basic introduction to the pieces and parts of the solution. I plan to follow this up with additional content on how these pieces work together to provide a flexible architecture allowing for nearly any network topology to be created.
I get asked many times what hardware and software versions are required to integrate with Cisco DNA Center. In addition, DNA Center has a variety of capabilities, including network provisioning, software image management, network visibility, and segmentation, so it can be challenging to know which network components are supported with which DNA Center features. Oh, and don’t forget there are software version requirements 😉 Here’s a link that provides this information.
Here’s a secret decoder ring for the part numbers of the 4000 series of the Cisco Integrated Services Routers.
First digit = the family; all are 4
Second digit = the sub-family: 4 (highest performance), 3 (middle performance), or 2 (lowest performance)
Third digit = total number of slots (the sum of NIM and SM slots)
Fourth digit = 1, identifying the first in that series; allows for incrementing for subsequent platforms in the series
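The decoding rules above can be expressed in a few lines of Python. This is purely illustrative (the function name is mine, and it only implements the scheme as described):

```python
def decode_isr4k(model):
    """Decode a Cisco ISR 4000 model number (e.g. "4451") using the
    digit scheme above: family, sub-family (performance tier),
    total NIM+SM slots, and series revision."""
    family, sub_family, slots, revision = (int(d) for d in model)
    tiers = {4: "highest performance", 3: "middle performance", 2: "lowest performance"}
    return {
        "family": family,                  # always 4 for this series
        "performance_tier": tiers.get(sub_family, "unknown"),
        "slots": slots,                    # sum of NIM and SM slots
        "revision": revision,              # 1 = first in the series
    }
```

For example, decode_isr4k("4451") reports the 4000 family, highest-performance sub-family, five total slots, and the first revision in that series.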
Here’s a link to the ISR 4000 model platform comparison.
If you want a quick way to test if your DNS filtering is working try accessing www.internetbadguys.com
If this site is being blocked properly you should see something like the following:
Remember that you can get this security filtering for free from OpenDNS. Simply point your DNS to 208.67.222.222 and 208.67.220.220.
I was recently helping a client try to change a very basic setting for their remote access VPN users. This setting is known as the “Copyright Panel” and is displayed just below the username and password fields as shown below. The client wanted to simply remove the “Copyright 2018” text completely.
To my surprise, this setting is located under the “Clientless SSL VPN Access” section of the “Remote Access VPN” configuration within ASDM. Clientless SSL VPN normally refers to the browser-based VPN portal, not the AnyConnect client. See below for a screenshot of where this setting can be changed.
To find this within ASDM navigate to:
Configuration -> Clientless SSL VPN Access -> Portal -> Customization -> select the customization assigned to your VPN connection in the right pane -> click Edit -> under Logon Page, go to Copyright Panel -> enable or disable “Display copyright panel” and enter your copyright message in the text box.