Blog: Networking

What do you get the paranoid schizophrenic who has everything?

An "EnhancedHardDrive" from Ensconce Data Technology, of course! Tired of destroying your hard drives at home the old-fashioned way with fire or thermite? How barbaric! How messy! The EDT Enhanced Hard Drive will flood itself with an acid mist using up to 17 remote triggers, rendering the drive forensically unrecoverable.

After talking with a customer about how to dispose of old hard drives, I started researching different data disposal methods and happened upon EDT's site. I doubt any of us will encounter anything at this level of security, but it seemed interesting so I thought I'd share. The niche market for a product like this would seem pretty small, but I recently read that their sales goal for the next year is somewhere around 25 million. Someone's got something to hide :)

 

I recently experienced a problem with the RDP client not passing the %clientname% variable when setting up a thin client running Win CE 6.0.

Further investigation showed that the %clientname% environment variable not being passed through to the terminal server had nothing to do with Win CE 6.0 itself. The issue was caused by the specific build of RDP 6.0 included on the image of the new shipment of thin clients: RDP 6.0 build 9.

When I checked, a new image was available for the HP thin clients from HP’s website. The new image included an updated build (14) of RDP 6.0. This build does pass through the %clientname% environment variable and allows the scripts to function normally. Build 14 of RDP 6.0 was not available individually as a download from HP’s site, so the only option at this time is to reload the entire image on the thin client.

When you download and run the exe, you are given the option of creating an ISO, copying the files to a USB drive, or deploying them to a network location. I copied the files to a USB drive and then booted the thin client from it. Note: the USB copy completely formats your USB flash drive, erasing any pre-existing data.
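For context, the scripts in question key off %clientname% at logon. A hypothetical Windows batch fragment (the thin-client name, print server, and share are all made up for illustration) might look like:

```bat
@echo off
rem Hypothetical logon script fragment; device and share names are invented.
rem With RDP 6.0 build 9 on these thin clients, %CLIENTNAME% arrived empty,
rem so a mapping like this silently never happened.
if "%CLIENTNAME%"=="TC-FRONTDESK" net use lpt1: \\printsrv\frontdesk
```

With build 9, the `if` test never matches because the variable is blank, which is why the scripts appeared to do nothing.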


 

I recently started reevaluating how we do port security as a result of a customer's information security audit.  We normally turn on port security and set the maximum number of MAC addresses to 1 (the default) or 2 (if there is an IP phone connected).  The default behavior is to disable the port when the MAC address changes or the number of concurrent MAC addresses exceeds the maximum.

However, during testing I discovered this didn’t work exactly like I expected.  Port security was enforced as long as a device stayed connected to the port.  If the device was disconnected, the switch would remove the pre-existing MAC addresses and ANY new device could connect, as long as the maximum was not exceeded.  While this prevents unauthorized hubs and switches, it doesn’t prevent someone from unplugging a device and plugging in a different, unauthorized one.

The solution to this is to use the sticky option on the port security interface command:

  • switchport port-security – enables port security, optional “maximum <n>” to set the max greater than 1
  • switchport port-security mac-address sticky – turns on the sticky MAC feature

After enabling, you will notice the currently connected MAC address(es) will appear in the running config:

  • switchport port-security
  • switchport port-security mac-address sticky
  • switchport port-security mac-address sticky 0080.6433.xxxx

This will stay in the config until the switch is rebooted, so it’s important to write the config.
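Putting the commands above together, a typical sticky-MAC access-port configuration might look like this (the interface number and maximum of 2 are just examples):

```
configure terminal
 interface FastEthernet0/1
  switchport mode access
  switchport port-security
  switchport port-security maximum 2
  switchport port-security mac-address sticky
 end
copy running-config startup-config
```

The final copy is the important part: it saves the learned sticky addresses so they survive a reboot.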

Other related commands:

  • show port-security address – lists all the learned MAC addresses by interface
  • show port-security interface fa0/1 – shows the detailed port security settings for an interface, including enable/disable status
  • clear port-security sticky interface fa0/1 – clears the learned sticky MAC addresses, must be done prior to a shut/no shut to re-enable a port disabled due to port security
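For example, bringing back a port that was err-disabled after a legitimate device swap would go roughly like this (fa0/1 used as the example interface):

```
clear port-security sticky interface fa0/1
configure terminal
 interface fa0/1
  shutdown
  no shutdown
 end
```

The clear must come first; bouncing the port without it just re-triggers the violation against the old sticky address.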

When you use sticky MAC addresses, you'll want to make sure the MAC addresses are cleared off a switch when a device is moved.  We had a laptop that was moved from one client location to another, and one of the distribution switches thought the device was still plugged into the old switch while the other distribution switch thought it was plugged into the new switch.  This created a situation where some network traffic was reaching the laptop and some was going into a black hole.  After clearing the sticky MAC addresses on the old switch, the problem was resolved.

Update:  You might also be interested in a couple of sticky MAC address tips.


 

In case you didn't know, there is a command line interface for WMI - wmic.

Some documentation is here: http://technet.microsoft.com/en-us/library/bb491034.aspx and http://technet.microsoft.com/en-us/library/bb742610.aspx.

You can write simple scripts to manage just about anything that you might write a short VB program for – printers, accounts, scheduled jobs, processes, etc. It has a lot of aliases, documented in the online help, but you can also use the actual class names.  If you just enter wmic on the command line and let it prompt you, it sets your command window width to 1500 so output from most commands will not wrap.  You can enter /? at any point for help.  Some examples:

  • wmic process get Caption,Commandline,Processid
  • wmic /node:server1 cpu get description, manufacturer, maxclockspeed, revision
  • wmic process where name='iexplore.exe' call terminate

That last one kills all processes running iexplore.exe.  If you just ran that to see what it would do without reading ahead, then chances are you are not reading this right now.
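To illustrate the alias/class distinction mentioned above, the process alias is shorthand for the Win32_Process WMI class, so these two commands should return the same thing:

```
wmic process get name,processid
wmic path win32_process get name,processid
```

The class form is handy for the many WMI classes that don't have a friendly alias.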

 

One of our customers was having problems reported by their terminal server users last week: their remote core systems were very slow and experiencing disconnects.  In the past, they have rebooted routers, the terminal server, and their domain controller to get things working better, but eventually it would slow way down again.  After doing a reboot of the equipment last week, the users again reported that it was very, very, very slow a few days later.  There was a Symantec Premium Anti-spam process running last week that I thought to be the culprit, but it looked to be running okay on the DC this time.

I decided I had better check the Internet connection speed using a speed test from cnet.com.  Their result was an abysmal 53 kbps (this is in the dial-up range, folks).  I asked them to call their ISP and have them check the line.  The ISP found problems with the cable modem (for one, it was ancient) and replaced it.  After the hardware replacement, things are running much more smoothly.

In summary: It never hurts to perform an Internet speed test when things are running slowly.


 

Sequential processing got me this week when I was configuring a rule in ISA to allow outbound traffic on TCP port 3000. Traffic kept getting blocked, but I didn’t know why. The rule configuration was correct. After putting a monitor in place, I noticed that the traffic was getting blocked by a rule higher up in the rule definitions. That rule was configured with the same destination IP and port number. The gotcha is that ISA matches traffic to the first access rule processed, in order from top to bottom. If two rules define the same destination IP and port, the first one is the only one that ever gets processed. ISA considers the first a “match” and never proceeds to subsequent rules.
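The first-match behavior is easy to mimic; here is a minimal sketch of the evaluation logic (rule fields simplified to an action, a destination IP, and a port, with all addresses invented):

```python
# First-match rule evaluation, the behavior behind the ISA gotcha.
# Rules are simplified to (action, destination IP, destination port).
rules = [
    ("deny",  "10.0.0.5", 3000),  # an earlier rule with the same destination...
    ("allow", "10.0.0.5", 3000),  # ...shadows this one, which never fires
    ("allow", "10.0.0.7", 443),
]

def evaluate(rules, dest_ip, dest_port):
    """Return the action of the first matching rule, top to bottom."""
    for action, ip, port in rules:
        if ip == dest_ip and port == dest_port:
            return action  # later matching rules are never consulted
    return "deny"          # nothing matched: default deny

print(evaluate(rules, "10.0.0.5", 3000))  # prints deny, not allow
```

The fix, as with ISA, is to reorder the rules or narrow the earlier rule's destination set.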


 

For those who like to experiment, there is open-source firmware available for many SOHO routers.

Here’s a partial list of the extra features you get with these firmware replacements:

  • Improved QoS capabilities
  • Better bandwidth reporting
  • Ability to increase wireless output power
  • Support for wireless clients and wireless bridging
  • Improved access restriction rules

 

Recently one of our clients was having problems viewing an image that was embedded (not linked) in an email. For other recipients of this same email, the image displayed correctly. Where the image should have appeared, there was simply an empty outline with a red X in the upper left-hand corner. After checking to make sure the Outlook security settings were configured to display images in emails, I discovered that the little-known (and invisible) OutlookSecureTempFolder was ‘full’ and that by emptying it out, images would display correctly in the emails. Here’s the nitty-gritty of what was happening:

When you open attachments/images directly from an email within Outlook (as opposed to saving the attachments to another location and then opening them from there), a copy is written to a temporary folder referred to as the OutlookSecureTempFolder. This particular user’s folder was ‘full’ (although she still had plenty of disk space). The trick is that to regular users this folder is invisible (even if you’ve enabled the “Show Hidden Files and Folders” setting) and its name is randomly generated. In Outlook 2007 that randomly named directory resides by default at:

In Windows XP:
C:\Documents and Settings\user\Local Settings\Temporary Internet Files\Content.Outlook\XXXXXXXX, where XXXXXXXX can be any random characters.

In Vista:
C:\Users\username\AppData\Local\Microsoft\Windows\Temporary Internet Files\XXXXXXXX, where XXXXXXXX can be any random characters.

To find (and change if you like) the location of this randomly generated folder path, look in the registry at: HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\Outlook\Security\OutlookSecureTempFolder
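If you'd rather not browse the registry by hand, the same value can be read from a command prompt (the 12.0 key corresponds to Office 2007):

```
reg query "HKCU\Software\Microsoft\Office\12.0\Outlook\Security" /v OutlookSecureTempFolder
```

The output shows the full path of the randomly named folder, ready to paste into Explorer.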

Once you find that directory, you can simply type the path directly into Windows Explorer, delete the temp files there, and your emails will begin to display images again. Now you can see all of those oh-so-wonderful image-laden forwards that your grandmother sends you!  If you want to bump up your security and avoid this problem at the same time, take a look at our recommendation in a previous post of automatically deleting Temporary Internet Files when you log off/shut down.


 

I received a new hard drive recently and was going to use ThinkVantage Rescue and Recovery to restore my current system to the new drive.  During this process I discovered that Rescue and Recovery requires that you restore a backup from the same location where the backups were made.

I have my backups configured to go to an external HD (using eSATA with a PCMCIA card).  The external hard drive is labeled “2nd Hard Drive” in R&R.  When I got the new drive in, I ran a full backup (3-4 hours), then swapped hard drives.  I was able to boot into the R&R environment, but didn’t have drivers for my PCMCIA card, so I couldn’t restore using eSATA.  My external HD also has connections that allow it to be used as a USB drive.  I was able to boot into R&R off of the external HD over USB, but no backups were showing up under “USB Drive” (or any other backup location).  I was afraid R&R was saving the backup location as part of the backup and just couldn’t see my “2nd Hard Drive” backups on a USB drive.  I swapped physical disks again, did another full backup over USB (8-10 hours), swapped disks again, and was able to boot and restore my “USB backup”.  This is just something to consider when you’re choosing a location to store your Rescue and Recovery backups.


 

I recently moved my laptop backups to an Acomdata external hard drive.  I noticed that it mounted two partitions, a hard drive partition and a CD partition, but did not worry too much about it since I had plenty of disk space on the hard drive partition.  The CD partition was created by the manufacturer to store their disk utilities and, like a normal CD, appeared to my laptop as read-only.  After saving multiple backups to this disk, I received a new internal hard drive and tried to restore from the backups on the external hard drive.  However, I could not boot to the external hard drive because my laptop would only recognize the CD partition during boot, not the hard drive partition.

After some Googling, I discovered that these CD partitions have caused quite a few issues, including preventing some Linux distributions from mounting the hard drive partition.  The easiest fix at first seemed to be to take the hard drive out of the external casing, connect it directly to a desktop PC's internal hard drive controller, and re-partition/format the entire drive.  Right before giving in to this solution, I found a blog post on LinkedIn which is no longer available.

This author spent some time with Acomdata support and got them to provide a software tool to remove the CD partition while it is connected externally via USB.  In the end, I moved my data off of the external hard drive, ran the tool, formatted the external hard drive as a single partition, moved my data back, and was able to boot/restore from the external hard drive.  I even have a little bit of extra space now that the CD partition is gone.