Web::Scraper with XPath and the dangers of Chrome

My first time submitting a question to Stack Overflow – what a great site!

I’d already written the SNMP discovery tool, which I was pretty proud of.  But there was still some information that was only available by logging into the device’s web interface: use basic auth to get in, navigate to a page and grab two strings, then do the same on five other pages.

Sounds like a job for a script!

Everything was going well: I decided to use Web::Scraper, wrote my program, and then … nothing.  It wouldn’t work, and I didn’t know enough about XPath to even know where to start.  Plus, the Web::Scraper module source was a bit scary to read.

Long story short, the XPath being returned by Chrome didn’t match the source HTML.  In the developer tools, you can right-click an element in the Elements panel and select “Copy XPath”.  What you might not know is that Chrome inserts HTML elements into its DOM that aren’t in the source HTML.  In my case, it was extra <tbody> elements inside tables, so the copied path contained a /tbody/ step that simply didn’t exist in the HTML my script was parsing.
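Here’s a minimal sketch of the gotcha.  My actual script used Perl’s Web::Scraper, but the same thing shows up in a few lines of Python with the standard-library XML parser (the table contents here are made up for illustration):

```python
import xml.etree.ElementTree as ET

# The HTML as the device actually serves it: note there is no <tbody>.
page = ET.fromstring("""
<table id="status">
  <tr><td>Serial</td><td>ABC123</td></tr>
  <tr><td>Firmware</td><td>1.2.3</td></tr>
</table>
""")

# The kind of path Chrome's "Copy XPath" hands you -- Chrome adds
# <tbody> to its DOM, so the step exists in DevTools but not in the
# source, and the query matches nothing.
chrome_path = ".//tbody/tr[1]/td[2]"
print(page.findall(chrome_path))         # []

# Drop the tbody step and the same query works against the raw HTML.
fixed_path = ".//tr[1]/td[2]"
print(page.findall(fixed_path)[0].text)  # ABC123
```

Another option, if you don’t control the XPath, is to make the step optional by matching the rows with a descendant axis (e.g. `//table//tr`) so it works whether or not a <tbody> is present.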

Oh, and learning the XPath syntax helped too!

Commented out because… WTF!?

Can anyone tell me why one of our guys commented this out of a third-party driver? =)

/* Commented out because… WTF!? – L.D.

if((gcAtlSlots+gcIntlSlots)!= MAX_ATL_NODE)
//Atl+Intl slots = MaxSlots. If user defines wrongly then the error validation is here
{

#undef gcIntlSlots
#define gcIntlSlots (MAX_ATL_NODE -gcAtlSlots)

}

*/

This reminds me of when a lecturer at university suggested that comments in code should not be too long, as they increase the size of the compiled binary….

Speaking of backups….

I need to get onto that.

Thing is, I have about 65 gig of music, 22 gig of photos, 11 gig of code, 8 gig of downloaded software, 2 gig of project documents, 4 gig of documentation, not to mention all the other bits and pieces.

That’s 112 gig, and it doesn’t even cover the TV and movies.

I suppose an external hard drive is the way to go. What software though?