Archive for the ‘security’ Category

How to decide if a website is trustworthy

Monday, September 24th, 2012

Occasionally people will send me an email they have received or a link to a website they've heard about and ask me if it's genuine or a scam. Usually it's easy to tell, but sometimes (even for the savvy) it's actually quite hard. The example that prompted this entry was a program that purported to speed up your computer by cleaning up orphaned temporary files and registry entries. This is an area that's ripe for scams - a program that does absolutely nothing could still seem effective thanks to the placebo effect. Also, running such a program is a vector by which all manner of nasty things could be installed. Yet there are genuine programs which do this sort of thing, and slowdown due to a massive (and massively fragmented) temporary directory is certainly possible.

Here are some methods one can use to try to figure out if something like this is trustworthy or not:

  • Trust network. Is it trusted by people you trust to be honest and knowledgeable about such things? I've never used CCleaner myself (I just clean up manually) but people I trust (and know to be knowledgeable about such things) say it's genuine. Similarly, think about how you came to find out about a program. If it was via an advert then that lends no credence (scammers can place adverts quite easily). If it was via a review in a trustworthy publication, that does lend some credence.
  • Do you understand the business model? CCleaner's is quite clear (a functional free program with paid support). The program that prompted this entry had a free version which just detected problems - fixing the problems required buying the full version. This business model seems just like "scareware" - the free program always finds hundreds of problems (even on a perfectly clean system) because its purpose is to convince people to buy the full version. Being honest would be a disadvantage! Even if the program starts out honest, there's a tremendous incentive to gradually become less honest over time.
  • Does it seem too good to be true? If so, it almost certainly is. (Though exceptions exist.)
  • Is there a way to verify it? Availability of source code is certainly a good sign - it's something genuine programs can do to demonstrate their honesty. A scam almost certainly wouldn't bother, because anyone who could examine the source code would not be taken in by it anyway. Though of course, once this starts being a factor a lot of people look for, it'll start being gamed. As far as I can tell, that hasn't happened at the time of writing, though - I think I would have heard about it if it had.
  • What does the internet say about it? Especially known-trustworthy sites that the scammer has no control over. Remember that scammers can put up their own good reviews, but getting bad ones taken down is much more difficult. So if there are a lot of posts in different places by different people saying that it's a scam, that's a pretty good signal that it's bad (not infallible though - someone might have a grudge against the authors of the program for reasons unrelated to whether it does what it's supposed to).

Spam for buying instead of selling

Wednesday, September 19th, 2012

Most unsolicited commercial email tries to get its readers to buy something, but recently I had some spam that was the other way around - they wanted to give me money! Specifically, they wanted to buy ad space on one of my sites. Now, given that I don't have any costs involved in running the site in question (the hosting and domain name were donated), I don't feel I have the right to put ads on the site. I wouldn't want to anyway - it's a better site for not having ads, and it's not like the amount of money involved is likely to make the slightest bit of difference to the quality of my life. For the same reasons I don't have ads on this site (maybe if I could make enough money from the ads to give up my day job it would be a different story, but none of my websites has anywhere near that amount of traffic!).

Even so, it took me a moment to decide whether to reply with a polite "no thank you" or to press the "report as spam" button. In the end I decided on the latter course of action - it's still unsolicited commercial email even if it's buying instead of selling. And one of the last things the internet needs is more advertising. It's kind of interesting though that advertisers (or ad brokers at least) are starting to court webmasters - it always seemed to me that the supply of advertising space on the internet vastly outstripped the demand, but maybe that's changing.

Similarly, my web host recently sent me a coupon for $100 of free Google ads - I have no use for them myself (I don't think anyone interested in what I have to say here is going to find it via an ad) but I hope I'll be able to donate them to someone who does have a use for them.

Preventing cheating in online games

Saturday, August 20th, 2011

In computer security, there is a general rule which says that you should never trust anything sent to your server by client software running on a user's machine. No matter how many cryptographic checks and anti-tampering mechanisms you put into your code, you can never be sure that it's not running on an emulated machine over which the user has complete control, and any bits could be changed at any time to give the server an answer it accepts.
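
As a concrete illustration of that rule, here's a minimal sketch of a server that believes its own records instead of the client (the names and the speed limit are invented for the example): the client reports a new position, and the server checks it against the last position it accepted, using its own clock rather than any client-supplied timestamp.

    #include <math.h>
    #include <stdbool.h>

    #define MAX_SPEED 10.0  /* game units per second - an invented rule */

    typedef struct { double x, y; } Position;

    /* The client reports a new position; the server checks it against the
       last position it accepted and the time elapsed on the SERVER's clock.
       A client-supplied timestamp would be just another thing to forge. */
    bool accept_move(Position last, Position reported, double elapsed_s)
    {
        double dx = reported.x - last.x;
        double dy = reported.y - last.y;
        return sqrt(dx * dx + dy * dy) <= MAX_SPEED * elapsed_s;
    }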

This is a problem for online gaming, though, as cheaters can give themselves all sorts of capabilities that the game designer did not plan for. This (apparently - I am not much of a gamer) reduces the enjoyment of non-cheating players.

However, games do have one advantage here - they generally push the hardware to (something approximating) its limits, which means that running the entire game under emulation may not be possible.

So, what games can do is have the server transmit a small piece of code to the client which runs in the same process as the game, performs various checks and sends the results to the server so it can determine if the user is cheating or not. The Cisco Secure Desktop VPN software apparently uses this technique (which is how I came to think about it). I have heard this small piece of code referred to as a "trojan" in this context, although the terminology seems misleading: this particular kind of trojan doesn't run without the user's knowledge and consent, and is only malicious in the sense that it doesn't trust the user (the same sort of maliciousness as DRM, which is not quite as bad as illegal malware).

The trojan for an online game could send back things which are very computationally intensive to produce (such as the results of the GPU's rendering of the game). Because the server can keep track of time, producing these results in anything less than real time would not suffice. To avoid too much load on the server, the computations would have to be things that are easier to verify than to compute in the first place (otherwise the server farm would need a gaming-class computer for every player, just to verify the results). And to avoid too much load on the client, it should be something that the game was going to compute anyway. I'm not quite sure how to reconcile these two requirements, but I think it should be possible.
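
I don't have a worked example that reuses rendering output, but the easier-to-verify-than-compute shape is the same one proof-of-work schemes use, so here is a minimal sketch in those terms. The FNV-1a hash stands in for a real cryptographic hash - it is not secure, just short enough to keep the sketch self-contained:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* FNV-1a, standing in for a real cryptographic hash. */
    static uint64_t fnv1a(const void *data, size_t len)
    {
        const unsigned char *p = data;
        uint64_t h = 1469598103934665603ULL;
        for (size_t i = 0; i < len; i++) {
            h ^= p[i];
            h *= 1099511628211ULL;
        }
        return h;
    }

    /* Client side: find x such that hash(nonce, x) has difficulty_bits
       leading zero bits - expensive, and the cost rises with difficulty.
       (difficulty_bits must be between 1 and 63 here.) */
    uint64_t solve_challenge(uint64_t nonce, int difficulty_bits)
    {
        uint64_t mask = ~0ULL << (64 - difficulty_bits);
        for (uint64_t x = 0; ; x++) {
            uint64_t buf[2] = { nonce, x };
            if ((fnv1a(buf, sizeof buf) & mask) == 0)
                return x;
        }
    }

    /* Server side: one hash and a clock check - cheap to verify. */
    bool verify_challenge(uint64_t nonce, uint64_t x, int difficulty_bits,
                          double elapsed_s, double limit_s)
    {
        uint64_t mask = ~0ULL << (64 - difficulty_bits);
        uint64_t buf[2] = { nonce, x };
        return (fnv1a(buf, sizeof buf) & mask) == 0 && elapsed_s <= limit_s;
    }

The game version would replace the hash search with work the client was doing anyway (such as rendering a frame), which is exactly the reconciliation problem mentioned above.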

The system should be tuned such that the fastest generally available computer would not be powerful enough to emulate the slowest computer that would be allowed to run the game. Depending on the pace of progress of computer technology and the lifespan of the game, it might eventually be necessary to change these requirements and force the users of the slowest computers to upgrade their hardware if they want to continue playing the game. While this would be frustrating for these players, I don't have a problem with it as long as there is a contract between the players and the game company that both agree to and are bound by - it would be part of the cost of playing without cheaters. Though I would hope that independent servers without these restrictions would also be available if there is demand for them.

Desktop security silver bullet

Wednesday, September 10th, 2008

Suppose there were a way to write a desktop operating system such that no malware (defined as any software running on a user's system that puts the interests of its authors before the interests of the user) could have any ill effects. Would it be implemented? Probably in open source, but I don't think Apple or Microsoft would include such technologies. Why? Because they wish to put their interests ahead of their users', running code on customers' machines which implements such malware as copy prevention (DRM), anti-cheating mechanisms in games and debugger detectors. Such a security system would make it very easy to work around this bad behaviour (it's always possible to work around it, but currently not always easy).

If such a security mechanism is possible, I think the way to do it would be through API subversion. When process A starts process B, it can ask the OS to redirect any system calls that process B makes to process A, which can then do what it likes with them - anything from passing them on to the OS unchanged to pretending to be the OS to process B. Since malware (on a well-designed OS) will need to use system calls to find out anything about its environment, it is impossible for process B to tell whether it has been subverted or not. Any filing system operation can be sandboxed to make process B think it has changed critical system files. Even the clock APIs can be subverted to make process B think it is running as fast as it should be.
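
I don't know of an OS with precisely this redirection API, but Linux's ptrace gives a flavour of the idea: a parent process can stop a child at every system call and (in a full implementation) rewrite its arguments or results. Here's a minimal observation-only sketch for x86-64 Linux:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ptrace.h>
    #include <sys/user.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t child = fork();
        if (child == 0) {
            /* Process B: ask to be traced, then run the target program. */
            ptrace(PTRACE_TRACEME, 0, NULL, NULL);
            execlp("ls", "ls", NULL);
            _exit(1);
        }
        /* Process A: resume B, stopping at every syscall entry and exit
           (so each syscall is reported twice). */
        int status;
        waitpid(child, &status, 0);
        while (!WIFEXITED(status)) {
            ptrace(PTRACE_SYSCALL, child, NULL, NULL);
            waitpid(child, &status, 0);
            if (WIFSTOPPED(status)) {
                struct user_regs_struct regs;
                if (ptrace(PTRACE_GETREGS, child, NULL, &regs) == 0)
                    fprintf(stderr, "syscall %lld\n", (long long)regs.orig_rax);
                /* A real sandbox would rewrite the registers here
                   (PTRACE_SETREGS) to lie to B about files, the clock
                   and so on. */
            }
        }
        return 0;
    }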

Once this infrastructure is in place, you can sandbox (say) your web browser so that it can make no changes outside its cache, cookies and downloads directories. Any untrusted process can do essentially nothing to the system without being given permission by the user, but it can't even tell whether that permission has been given, so it makes no sense for it to ask. This essentially solves the "dancing bunnies" problem - any request by malware for extended capabilities can (and most likely will) be ignored, since granting it would require the user to go through extra steps, and those steps would do nothing to make the promised dancing bunnies appear.

One problem with this scheme is that the time it takes to do a system call is multiplied by the number of subversion layers it goes through. One can't use in-process calls because then the malware would be able to examine and modify the system calls by reading and writing its own process memory. So one would need to use the techniques described here.

Plug-ins should not be in the same address space as the main process

Friday, June 27th, 2008

After some thought I have come to the conclusion that a software architecture which allows third-party code to run in the same process as the main program is a bad idea.

The first problem with such an architecture is reliability - how do you protect yourself against bugs in the third-party code? The short answer is that you can't - the plug-in has the ability to stomp all over your process's invariants. The best you can do is terminate the process when such a situation is detected.

The second problem is that a compiler can't reason about the third-party code (it might not even exist at compile time). This means that there are all kinds of checks and optimizations that cannot be performed (like some of the stack tricks I mentioned a while ago).

The third problem is that one cannot tell what code the plug-in will rely on - if it sticks to documented interfaces that's okay, but (either deliberately or accidentally) a plug-in might rely on undocumented behavior which causes the plug-in to break if the program is updated. This forces the program to have ugly kludges to keep old plug-ins working.

If plug-ins are run as separate processes communicating with the main program via well-defined IPC protocols, all these problems are solved. The downside is that these protocols are likely to be a little more difficult to code against and (depending on the efficiency of the operating system's IPC implementation) may also be significantly slower. The speed problem seems unlikely to be insurmountable, though - current OSes can play back HD video, which involves the uncompressed video stream crossing the user/kernel boundary, and few applications are likely to need IPC bandwidth greater than that.
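
To make the shape of this concrete, here's a minimal sketch of a host talking to a "plug-in" over pipes using a length-prefixed protocol. The protocol is invented for the example, and the plug-in is forked inline to keep the sketch self-contained - a real system would exec a separate plug-in binary:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void write_msg(int fd, const char *s)
    {
        uint32_t len = (uint32_t)strlen(s);
        write(fd, &len, sizeof len);   /* length prefix, then payload */
        write(fd, s, len);
    }

    static char *read_msg(int fd)
    {
        uint32_t len;
        if (read(fd, &len, sizeof len) != sizeof len) return NULL;
        char *buf = malloc(len + 1);
        read(fd, buf, len);
        buf[len] = '\0';
        return buf;
    }

    int main(void)
    {
        int to_plugin[2], from_plugin[2];
        pipe(to_plugin);
        pipe(from_plugin);
        if (fork() == 0) {
            /* Plug-in process: crash here and the host survives. */
            char *req = read_msg(to_plugin[0]);
            char reply[256];
            snprintf(reply, sizeof reply, "plugin saw: %s", req);
            write_msg(from_plugin[1], reply);
            _exit(0);
        }
        write_msg(to_plugin[1], "render-thumbnail request");
        char *reply = read_msg(from_plugin[0]);
        printf("%s\n", reply ? reply : "(plugin died - host carries on)");
        wait(NULL);
        return 0;
    }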

I want to run as root

Wednesday, June 25th, 2008

With all the hype about Vista's User Account Control functionality, an important point seems to have gone largely unsaid. UAC protects the integrity of the operating system, but does nothing for the user's files. If I run some bad software in a non-administrator account, it can still read my financial documents, delete my photos, etc. These things are much more difficult to fix than the problems UAC does prevent (which can be fixed by reinstalling the operating system).

The trend towards a more Unix-like operating system structure annoys me somewhat. I want to run as root/admin all the time. If I ask it to modify some critical system files, the operating system shouldn't second-guess me - it should just do what I asked. I have been running Windows systems as administrator for years and it has never been a problem for me in practice. I don't ever want to have to input my password for machines that I own that I'm sitting in front of (remote access is different).

I think a better security model for a single-user machine would be to authorize not individual commands but programs. When a piece of software is downloaded from the internet and run, the OS calls it makes should be sandboxed. If it attempts to modify the system directories, the OS should fake it so that the system directories are not modified but it looks to that application like they have been. Private user files should simply not appear to be present at all (invisible, rather than visible but read-locked).

Vista is actually capable of this to some extent, but it is only used as a last resort, to enable legacy programs to run. Applications released since Vista tend to have manifests which allow them to fail instead of being lied to. I don't think a program should even have the ability to tell that it is being lied to - if I want to lie to my media player and tell it that the sound output is going to a speaker when in fact it is going to a file, I should be able to do that. This is similar to (but not quite the same as) a chroot jail in Unix, though chroot is not intended to be used as a sandbox for untrusted applications.
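
For comparison, this is roughly what the chroot version looks like (the jail path and program name are placeholders, and note that chroot itself requires root privileges - one of the ways it differs from what I'm describing):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical jail directory, prepared in advance with whatever
           files the untrusted program is allowed to see. */
        const char *jail = "/tmp/jail";
        if (chroot(jail) != 0) { perror("chroot"); return 1; }
        if (chdir("/") != 0)   { perror("chdir");  return 1; }
        /* From here on, "/" is /tmp/jail; the user's real files do not
           appear to be present at all. */
        execl("/bin/untrusted", "untrusted", (char *)NULL);
        perror("execl");
        return 1;
    }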

I suppose that, having said all that, what Microsoft have done in Vista does make sense for them - in the long run it will probably reduce support calls and promote development of software that requires only the minimum of privileges to run. I just wish I could turn it off completely without the annoying side-effects.

Tainted files

Monday, May 14th, 2007

A lot of work has gone into making Windows in general and Internet Explorer specifically more secure over the past few years. One of my favorite Windows security features is that when you download a file with IE and save it to disk, it is marked with some extra metadata saying "hey, this file is not to be trusted". Then, when you later forget where the file came from and try to run it, a big scary warning pops up saying "you might not want to run this unless you're sure it's safe". Unlike some security features, it doesn't prevent you from doing anything - it's just a perfectly sensible tainting system.
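
On NTFS, the metadata in question is an alternate data stream named Zone.Identifier attached to the downloaded file, and it can be read like an ordinary file. A sketch (the file name is a placeholder):

    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        /* The ":Zone.Identifier" suffix opens the alternate data stream
           rather than the file's main contents. */
        HANDLE h = CreateFileA("download.exe:Zone.Identifier",
                               GENERIC_READ, FILE_SHARE_READ, NULL,
                               OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            printf("No taint - file was not marked as downloaded.\n");
            return 0;
        }
        char buf[256];
        DWORD n = 0;
        if (ReadFile(h, buf, sizeof buf - 1, &n, NULL) && n > 0) {
            buf[n] = '\0';
            printf("Tainted:\n%s\n", buf);  /* e.g. [ZoneTransfer] ZoneId=3,
                                               where 3 means "Internet" */
        }
        CloseHandle(h);
        return 0;
    }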

It's a shame that this technique hasn't been generalized to files that come from other untrusted sources like CDs, floppy disks, USB keys, etc. A lot of attacks could be mitigated that way.

Security requires the right mindset

Thursday, March 9th, 2006

A friend of mine at Microsoft told me this story about his manager, who is a very smart guy but apparently doesn't have the right mindset to be writing software that doesn't have security holes. The other day my friend and his manager were in their offices (just across the corridor from each other). The manager was making a phone call. To his bank. On speakerphone. With the door open. To verify his identity, he had to key in his social security number. This number was then repeated by the electronic voice on the other end of the line for the entire corridor to hear. D'oh. To make matters worse, he continued the entire phone call on speakerphone (with the door open).

Security hole

Tuesday, March 7th, 2006

At work today, I had a security hole to investigate. Say what you like about Microsoft, but you have to admit that in recent years they have really turned things around when it comes to taking security seriously. It was interesting to experience this from the inside today.

As it turned out, all the Microsoft Security Response Center wanted to know was whether this bug (which is known to affect certain no-longer-supported parts of Visual Studio 6) also affected the later versions (Visual Studio .NET, Visual Studio .NET 2003 and Visual Studio 2005). Rather than just testing the exploit against these later versions, I insisted on debugging through the VS6 code to find the fault and then making sure that the offending code was fixed in the later versions.

It's quite a strange experience to be debugging through such old code (some of it at least a decade old), especially when I work every day on the code that is descended from it. It's kind of like going back in time and meeting your ancestors. There are some strangely familiar things there, but it is very much code from another time. It's also much simpler, with ten years' fewer layers of features and added special cases. I was surprised at how easy the bug was to track down.

It was also a relief that, when I found the right place, the bug was completely obvious to any (modern day) programmer looking at the routine in question. Rather than being some subtle and hard-to-spot side-effect of a rare interaction between unrelated parts, the faulty code was doing all the things you're not supposed to do, like allocating a fixed-length buffer on the stack and concatenating C-style strings without any size checks. Such code could never get into the product with the processes we have in place now.
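
For illustration, the pattern (though not the actual Visual Studio code) looks something like this, together with the obvious modern fix:

    #include <stdio.h>
    #include <string.h>

    void vulnerable(const char *dir, const char *name)
    {
        char path[64];              /* fixed-length buffer on the stack */
        strcpy(path, dir);          /* no size check... */
        strcat(path, name);         /* ...so a long 'name' overruns 'path'
                                       and smashes the stack */
        printf("%s\n", path);
    }

    void fixed(const char *dir, const char *name)
    {
        char path[64];
        /* snprintf truncates rather than overruns; real code should also
           detect and handle the truncation. */
        snprintf(path, sizeof path, "%s%s", dir, name);
        printf("%s\n", path);
    }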

Amazingly, the function with the fault still exists in the codebase today, though the buffer overrun was fixed a long time ago (sometime before November 2000, when the code was moved to the version control system we currently use). It's good to know that when issues like this come up we can track them down quickly even in ancient code, and that our processes work.