I googled and didn’t find anything useful, so I thought I’d share this. If you have a VISA gift card and want to use it on Amazon.com in conjunction with a credit card (to pay for any amount over the gift card value), the trick is to use the VISA gift card to purchase an Amazon gift card of the same value for yourself (search for “gift card” on Amazon.com).
Once you have the Amazon gift card, you can then use it to pay for part of your order, with the remaining balance being charged to another credit card.
I am sure this trick works with most other online merchants too.
On my home network, I have a media file server running Vista 64-bit that serves out music and movies. Since everything is behind a router, I decided to make the shared media folders accessible to anyone on my home network. All you have to do is browse to the media server and start accessing content, without having to log in.
To enable anonymous browsing of a folder shared by a Vista PC (one that is not on a domain), do the following:
Enable the Guest Account:
Run "lusrmgr.msc", select the Users folder.
Right click on the Guest user and choose Properties.
Uncheck "Account is disabled".
Enable "Public folder sharing" and disable "Password protected sharing":
Choose Start, right click on Network, and choose Properties.
Enable "Public folder sharing".
Disable "Password protected sharing".
For each shared folder:
Grant Everyone Read permission to the Share.
Grant Everyone Read and Execute (NTFS) permission on the shared folder itself.
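If you prefer to script those last two permission steps, the same grants can be made from an elevated command prompt. This is just a sketch; the share name "Media" and the path D:\Media are placeholders for your own:

```bat
:: Share the folder and give Everyone read access to the share
net share Media=D:\Media /GRANT:Everyone,READ

:: Grant Everyone Read & Execute (NTFS) on the folder,
:: inherited by subfolders (CI) and files (OI)
icacls D:\Media /grant "Everyone:(OI)(CI)RX"
```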
At home, I occasionally need to print color posters and black and white flyers. I’ve found the FedEx Office Online Printing Service to be very convenient for this (if you know exactly what you want… more on that later). After you upload your file in one of the supported formats (Word, PowerPoint, Excel, RTF, PostScript, PDF, Text, JPG), select a paper size, and set print options (color/black and white, copies, collation, paper stock, etc.), the web site gives you a preview of the final print output.
For my last print job, the file I wanted to print was in CorelDRAW format, so all I had to do was go into CorelDRAW and export it to PostScript. The final print output looked perfect to me.
There is a "minor" problem with the service however: there are no prices to be found on the site anywhere. No, the printing is not free, sorry. You do eventually see the total price when you check out. The only reason I can think of for this strange "price hiding" practice is so that people can’t easily compare online prices vs walk-in prices. They obviously have complete pricing data in the system, because the site does give you a total at checkout. This lack of up-front pricing is a major hassle, especially if you are not sure which options you want (type of paper, etc). You can’t easily/quickly compare the different printing options (and there are tons of them). Changing your order and going through the checkout process just to see the price is too cumbersome.
One has to ask, what were they thinking??? I certainly hope this is not a trend among online stores. And don’t you hate it when you google something (such as "FedEx Kinko’s prices") and the first thing you find is other people looking for the same info and not finding it either :-).
Here are some actual prices I got recently (December 2008) for my local FedEx Kinko’s (Richmond, VA):
– 8×11, B&W, 30% Recycled Paper: 10c/page
– 8×11, B&W, Standard Laser Paper: 12c/page
– 8×11, Color, Standard Laser Paper: 59c/page
– 17×11, Color, Standard Paper: $1.78/page
There is a volume discount when you order more than x copies. It seems that the discount starts at 100 copies.
Against popular wisdom, I decided to upgrade my bedroom home theater PC to Vista 64-bit a couple of days ago (I just have to make full use of all of my precious 4GB of RAM). Everything is working surprisingly well so far, with the exception of sound! Whenever I play any audio, my speakers now produce all kinds of pops and crackles along with the normal audio stream. Urgg.
After a couple of days of googling, tweaking various sound settings, uninstalling/reinstalling drivers, etc. without success, I almost gave up on the thing. Then I decided to try just one more thing: changing the default sample rate from “2 channel, 24 bit, 48000 Hz” to “2 channel, 16 bit, 44100 Hz (CD Quality)”, and just like magic, the pops and crackles are gone.
Nice surprise logging into Gmail just now: Themes! I tried out a few and have to say they are nice to look at. I almost forgot I have actual emails to read. I’m going to try out a cheery theme to offset the grim news from Wall Street.
There’s even a Terminal theme for the UNIX shell diehards. It’s actually kind of cool… if only for a few minutes.
Greetings visitor from the year 2020! You can get the latest optimized working source code for this, including a version that does not use unsafe code, from my Github repo here. Thanks for visiting.
Recently I needed a way to find blank images in a large batch. I had tens of thousands of images to work with, so I came up with this C# function to tell me whether an image is blank.
The basic idea behind this function is that blank images will have highly uniform pixel values throughout the whole image. To measure the degree of uniformity (or variability), the function calculates the standard deviation of all pixel values. An image is determined to be blank if the standard deviation falls below a certain threshold.
Here’s the code. It uses the System.Drawing and System.Drawing.Imaging namespaces, and the project in which it resides must have “Allow Unsafe Code” checked in order to compile.
public static bool IsBlank(string imageFileName)
{
    double stdDev = GetStdDev(imageFileName);
    // Empirical threshold; tune it for your images. A blank image
    // (highly uniform pixels) has a standard deviation near zero.
    return stdDev < 10;
}

/// <summary>
/// Get the standard deviation of pixel values.
/// </summary>
/// <param name="imageFileName">Name of the image file.</param>
/// <returns>Standard deviation.</returns>
public static unsafe double GetStdDev(string imageFileName)
{
    double total = 0, totalSquared = 0;
    int count = 0;

    // First get all the bytes
    using (Bitmap b = new Bitmap(imageFileName))
    {
        // Force 24bpp so the 3-bytes-per-pixel walk below is always valid.
        BitmapData bmData = b.LockBits(
            new Rectangle(0, 0, b.Width, b.Height),
            ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
        int stride = bmData.Stride;
        IntPtr Scan0 = bmData.Scan0;
        byte* p = (byte*)(void*)Scan0;
        int nOffset = stride - b.Width * 3;

        for (int y = 0; y < b.Height; ++y)
        {
            for (int x = 0; x < b.Width; ++x)
            {
                byte blue = p[0];
                byte green = p[1];
                byte red = p[2];

                int pixelValue = red + green + blue;
                total += pixelValue;
                totalSquared += (double)pixelValue * pixelValue;
                count++;

                p += 3;
            }
            p += nOffset;
        }
        b.UnlockBits(bmData);
    }

    // Var(X) = E[X^2] - E[X]^2; the Max() guards against a tiny
    // negative variance caused by floating point rounding.
    double avg = total / count;
    double variance = totalSquared / count - avg * avg;
    return Math.Sqrt(Math.Max(variance, 0));
}
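Usage is a one-liner (the file path here is a hypothetical example):

```csharp
// Hypothetical file name; point this at one of your own images.
bool blank = IsBlank(@"C:\scans\page0001.png");
Console.WriteLine(blank ? "blank" : "has content");
```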
Ever since Windows 2000, menu keyboard shortcut characters have not been underlined by default. According to Microsoft, the underlined letters are hidden until you press the Alt key. Let’s try that… First, use the mouse to click on the Help menu in Visual Studio:
Now, press Alt to show the underlined letters, right? Poof, the menu is gone. Ok, that’s an easy one. I’m sure everyone has figured out that the Alt key must be pressed before you open the menu. But can anyone tell me this? How do I show underlined letters for right-click/context menus with the Alt key? Well, the short answer is you can’t! If you don’t believe me, try it yourself. I’ve tried Alt+right-click, Alt then right-click, right-click then Alt, etc. Nothing works.
The only thing I’ve found to work is the Application key (the key with the image of a mouse pointer on a menu, between the right Alt and Ctrl keys). Interestingly, the Application key always shows the underlined letters regardless of the “hide underlined letters” setting. The keyboard combination Shift+F10 also brings up the context menu, but that shortcut does not show the underlined letters.
You can forget about all of this nonsense and have Windows always show the underlined letters by changing a setting (instructions below are for Windows XP):
Open the Display Control Panel.
Click on the Appearance tab, then Effects…
Uncheck “Hide underlined letters for keyboard navigation until I press the Alt key”.
If you do any web scraping (also known as web data mining, extracting, or harvesting), you are probably familiar with the main steps: navigate to a page, retrieve the HTML, parse it, extract the desired elements, repeat. I’ve found the SgmlReader library to be very useful for this purpose. SgmlReader turns your HTML into XML. Once you have the XML, it’s fairly easy to use built-in classes such as XmlDocument, XmlTextReader, and XPathNavigator to parse and extract the data you want.
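Here is a sketch of that pipeline in C#. The URL and the XPath expression are placeholders, and the SgmlReader property names (DocType, InputStream) are from the library as I’ve used it:

```csharp
using System;
using System.IO;
using System.Net;
using System.Xml;
using Sgml; // the SgmlReader library

class ScrapeSketch
{
    static void Main()
    {
        // 1. Navigate to page / retrieve HTML (placeholder URL).
        string html;
        using (var client = new WebClient())
            html = client.DownloadString("http://example.com/stocks");

        // 2. Let SgmlReader turn the (possibly malformed) HTML into
        //    well-formed XML.
        var sgml = new SgmlReader();
        sgml.DocType = "HTML";
        sgml.InputStream = new StringReader(html);

        var doc = new XmlDocument();
        doc.Load(sgml);

        // 3. Extract the desired element with an XPath expression
        //    (hypothetical id).
        XmlNode node = doc.SelectSingleNode("//table[@id='searchResultTable']");
        Console.WriteLine(node == null ? "not found" : node.InnerText.Trim());
    }
}
```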
Now to the labor intensive part: before your program can make sense of the XML, you have to manually analyze the HTML/XML first. Your program won’t know jack about how to extract that stock price until you tell it exactly where the stock price is, typically in the form of an XPath expression. My process of getting that XPath expression goes something like this:
Scroll to/find desired element in the XML editor.
Does element have unique attributes that can be used?
a – If yes, write an XPath expression that filters on the attribute value. Example: //Table[@id="searchResultTable"].
b – If no, write an absolute XPath expression. Example: /html/body/div/pre/font/table/tr/td/table/tr/td/span.
Step 2b is where it gets very labor intensive and boring, especially for a big web page with many levels of nesting. Visual Studio 2005 XML Editor/Resharper have a couple of features that I find useful for this:
– Visual Studio’s Format Document (Edit/Advanced/Format Document) command formats the XML with nice indentation and makes it a lot easier to look at.
– With Resharper, you can press Ctrl-[ to go to the start of the current element, or if you are already at the start, go to the parent element.
Even with the above tools, it’s still a painful and error-prone exercise. Luckily for us, Firebug has the perfect feature for this: Copy XPath. To use it, open your HTML/XML document, open the Firebug pane (Tools/Firebug/Open Firebug), navigate to the desired element, right click on it and choose “Copy XPath”.
You should now have this XPath expression in the clipboard, ready to be pasted into your web scraper application: "/html/body/div/table/tr/td/table".
A feature that I would love to have is the ability to generate an alternate XPath expression using “id” predicates, such as this: //Table[@id="searchResultTable"]. With web pages that are not under your control, you want to minimize the chance that changes on the pages break your code. Absolute XPath expressions are vulnerable to any change on the page that alters the order and/or nesting of elements. XPath expressions using an “id” predicate, on the other hand, are less likely to be impacted by layout changes because in HTML, element IDs are supposed to be unique. No matter where your element is on the page, if it has the same ID, you should still be able to get to it by looking up the ID. Hmm… this sounds like a good idea for a Visual Studio Add-in.
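A rough sketch of what such a generator could do: walk up from the target node and stop at the nearest ancestor that has an id. The helper below is hypothetical (my own illustration, not part of any library):

```csharp
// Hypothetical helper: build an XPath for a node, preferring an
// id predicate over a long absolute path when one is available.
static string BuildXPath(XmlNode node)
{
    if (node == null || node.NodeType == XmlNodeType.Document)
        return string.Empty;

    var element = node as XmlElement;
    if (element != null && element.HasAttribute("id"))
        return "//" + element.Name +
               "[@id='" + element.GetAttribute("id") + "']";

    // No id here: recurse upward and append this element's name.
    return BuildXPath(node.ParentNode) + "/" + node.Name;
}
```

For a node with an id anywhere in its ancestor chain, this yields a short, layout-resistant expression; otherwise it degrades to the absolute path.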