Archive for October, 2008

It's hard to buy a wifi card that works with Linux

Friday, October 31st, 2008

I recently reorganized my home wireless network a bit, and the AP that I had been using with my Linux box stopped working. I wanted to replace it with an internal card, but it's annoyingly difficult to find a wifi card that works well with Linux.

Various chipsets are supported with Free drivers but the trouble is that you can't buy a card by chipset - you have to pick a card, research it to try to figure out what the chipset is and then see if it is supported. Even then there's no guarantee because many manufacturers make several completely different cards with different chipsets and give them the same model number (which kind of defeats the point of a model number if you ask me). And the online shopping places don't tell you the revision number of the card you're buying.

Eventually I gave up trying to find one with Free drivers and settled on this one which people seemed to be having success with. Indeed Ubuntu 8.04 recognized it straight away and connected to my network. Still, it's annoying that it's so difficult to buy a card for which Free drivers exist.

Scanning photos is annoying

Thursday, October 30th, 2008

Since I lost my old digital camera, I used a disposable camera and my old compact 35mm camera. This meant that I had a couple of old-fashioned printed photos to scan in. It's been so long since I last scanned a batch of photos that I had forgotten what an annoying process it is.

Prints tend to be slightly convex, so when placed on the scanner glass they tend to be in contact with it at only one point. This means that they will rotate about that point at the slightest provocation (like a gentle breeze in the room, another photo being placed on the glass next to them, or removing one's finger from the photo once one has finished placing it just so).

My scanner is just the right size to scan three photos at once, which is great apart from the fact that the sensor area of the scanner is slightly smaller than the glass, and the bottom of the third photo gets cut off. Fortunately it's only a small strip so I decided not to bother rescanning them all.

Then, after scanning most of the photos, I'll notice that there is a smudge or some dust on the glass, which will of course appear in all the photos that I've scanned so far. Hopefully it won't be too noticeable.

I am quite impressed at how well the photos came out given that the film in my old camera had been sitting there for the best part of 7 years (I guess it helped that it had been in dark cupboards and drawers for most of that time). I'm also quite impressed at the picture quality you can get from a disposable camera these days - comparable to my old compact 35mm. I guess that even with the rise of digital, 35mm technology has continued to improve. Digital is still so much better though.

I'll post the results of the scanning session here soon.

A stack of refactorings

Wednesday, October 29th, 2008

I'm not sure if this is a bad habit of mine or if other programmers do this too. Sometimes, after having partially written a program, I'll decide I need to make some change which touches most of the code. So I'll start at the top of the program and work my way down, making that change wherever I see it needed. Partway through doing this, however, I'll notice some other similarly impactful change I want to make. Rather than adding the second refactoring to my TODO list and continuing with the first, I'll go right back up to the top of the program and work my way down again, this time making changes wherever I see either of the refactorings needed. I reckon I've had as many as 5 refactorings going on at once sometimes (depending on how you count them - sometimes an earlier refactoring might supersede a later one).

Keeping all these refactorings in my head at once isn't as big a problem as it might sound, since looking at the code will jog my memory about what they are once I come across a site that needs to be changed. And all this reading of the code uncovers lots of bugs.

The downside is that I end up reading (and therefore fixing) the code at the top of my program much more than the code further down.

Chemically controlled radioactivity

Sunday, October 26th, 2008

Most of the properties of materials (including all of chemistry) are modelled by assuming atomic nuclei to be indivisible particles. You can derive almost all chemistry and properties of materials from the laws of quantum mechanics and the masses and charges of electrons and nuclei.

Nuclear physics, on the other hand, is all about the internals of the nuclei and (except under extreme conditions) a nucleus is unaffected by electrons and other nuclei (even in electron capture the particular chemical configuration of an atom doesn't affect its decay rate significantly).

This is useful in some ways - you can replace some nuclei with different isotopes (for example in nuclear medicine) without changing the chemistry. Of course, once a nucleus decays (other than by gamma decay), its charge will change and the chemistry will be different.

Suppose that nuclear decay rates did depend significantly on chemical configuration - what would the consequences be? The obvious consequence is that it would be possible to make nuclear weapons that are more difficult to detect, since they could be made non-radioactive until they undergo a chemical reaction.

More subtly, there would be a whole new spectrum of chemical processes made possible by the storage of energy in (and release of energy from) the atomic nuclei. This could lead to new (cleaner, safer) forms of nuclear power, and all sorts of other interesting applications.

This is all wild speculation of course, but this suggests there is still much that we don't understand about atomic decay processes.

Game engines will converge

Saturday, October 25th, 2008

At the moment, just about every computer game published has its own code for rendering graphics, simulating physics and so on. Sometimes this code is at least partially reused from game to game (e.g. Source), but each game still comes with its own tuned and updated version of it.

I think the games industry will eventually reach the point where game engines are independent of the games themselves. In other words, there will be a common "language" that game designers will use to specify how the game works and to store assets (graphics, models, sound, music etc.), and there will be multiple client programs that can interpret this data and be used to actually play the game. Some engines will naturally be more advanced than others - these may be able to give extra realism to games not specifically written to take advantage of it. And games written for later engines may be able to run on earlier ones with some features switched off.

Many classes of software have evolved this way. For example, in the 80s and early 90s there were many different ways of having rich documents, and many such documents came with their own proprietary readers. Nowadays everybody just uses HTML and the documents are almost always independent of the browsers. As far as 2D games are concerned, this convergence is already happening to some extent with Flash.

3D games have always pushed hardware to its limits, so the overhead of having a game engine not tuned for a particular game has always been unacceptable. But as computer power increases, this overhead vanishes. Also, game engines are becoming more difficult to write (since there is so much technology involved in making realistic real-time images) so there are economies of scale in having a common engine. Finally, I think people will increasingly expect games to be multi-platform, which is most easily done if games are written in a portable way.

If game design does go this way, I think it will be a positive thing for the games industry - it will mean that more of the resources of the game can be devoted to art, music and story-telling. This may in turn open up whole new audiences for games.

Language optimized for refactoring

Friday, October 24th, 2008

One property of computer languages that is important but often seems to be overlooked is how easy it is to refactor programs written in them.

The one example that springs immediately to mind is renaming a class. In C++ this is a bit more difficult than in many languages because the constructors and destructors have the same name as the class, so you have to go and change all of those too. PHP wins here for calling them __construct and __destruct respectively.
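The same design appears in Python, where the constructor is always called __init__ regardless of the class name. A tiny sketch (the class name here is made up for illustration):

```python
# Python, like PHP, decouples the constructor's name from the class name:
# __init__ stays the same no matter what the class is called, so renaming
# the class touches only the "class" line and the call sites.
class Widget:                  # rename to Gadget: change only this line (and callers)
    def __init__(self, size):  # ...but never this one
        self.size = size
```

In C++ the equivalent rename would also touch `Widget::Widget` and `Widget::~Widget`.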

If you are in the school of thought that has C++ method definitions in a separate file (e.g. .cpp) from the class declarations (.h), you have to go and change things in two different files (even if you're just adding a method that nobody calls yet). If that class implements a COM interface defined by a .idl file, then there's yet another thing you need to change.

Python's syntactically-significant whitespace is another winner here because if (for example) you put another statement in an "if" clause that currently only has one statement, you don't have to add braces.
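A small sketch of that point (the function is invented for illustration): the "if" body below can grow a second statement with nothing more than a new indented line.

```python
def clamp(x, limit):
    """Return x, but no larger than limit (illustrative example)."""
    if x > limit:
        x = limit
        # A second statement slots in here as just another indented line;
        # in a brace-delimited language this edit would also mean adding { }.
    return x
```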

I'm sure there are many other, deeper examples.

Once you go OOP, there's no going back

Thursday, October 23rd, 2008

Object Oriented Programming is at least as much a state of mind as a set of programming language facilities. When I learnt C++ it was a bit difficult to get used to writing object-oriented programs but now that I've been doing it for many years I can't get used to thinking about my programs any other way.

I was writing some PHP code recently and (not knowing about PHP classes) started writing it in a procedural fashion. After a while I noticed that many of the functions I was writing started to fall naturally into classes (with a first parameter that gave the function context). So it was only natural to re-write it in object-oriented style once I figured out how to do so.

In the process of doing so, I found lots of bugs in my original code (which I had thought was rather nifty). Many functions became much simpler. I also found it was much easier to do various optimizations that would have been very difficult to do without classes (such as minimizing the number of database queries). My code file did become somewhat bigger, but I attribute this to the extra indentation most lines have, and the fact that PHP requires you to write "$this->" everywhere.
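The shape of that refactoring, sketched in Python rather than PHP (the function and field names are invented for illustration): procedural functions that all thread the same context through their first parameter are a class waiting to happen.

```python
# Before: procedural style, with a "context" dict threaded through every call.
def cart_add(cart, price, qty):
    cart["items"].append({"price": price, "qty": qty})

def cart_total(cart):
    return sum(item["price"] * item["qty"] for item in cart["items"])

# After: the context parameter becomes `self`, and the functions become methods.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, price, qty):
        self.items.append({"price": price, "qty": qty})

    def total(self):
        return sum(item["price"] * item["qty"] for item in self.items)
```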

I also tried writing a C program (from scratch) for the first time in a very long time a while ago. I found myself using an object-oriented style and implementing vtables as structs.

Javascript exchange site

Wednesday, October 22nd, 2008

Back in the 80s, most home computers used to boot into a dialect of BASIC. This made it very obvious how to start to learn to program - just type things in and try things out to see what works.

Modern computers are much richer in many ways but do have the disadvantage that it's less obvious how to start programming. One could even be forgiven for assuming that the typical off-the-shelf Windows Vista machine doesn't even come with a built-in programming language. Actually there are 3 (at least) - the Windows command shell language, VBScript and JScript (JavaScript). The Windows command shell language (the descendant of the MS-DOS batch language) is ugly, badly documented and almost impossible to debug, so let's skip that one. Between VBScript and JScript, the latter is better to learn because it's cross-platform, while VBScript is Windows-only. There are two ways (at least) to run JScript in Windows - one is through the Windows Script Host (wscript.exe or cscript.exe) and the other is through the web browser. The latter is a graphically rich, interactive and familiar environment, so I think that's the way to go.

JavaScript is a much nicer language than the 8-bit BASIC dialects from the 80s, but it's still not very discoverable. The tutorials and reference guides are all out there, but you have to have a text editor open in one window, one browser window for your program and at least one other browser window containing your reading material. I think this is a problem that could be solved with a website.

I'd like to see a site which does for Javascript what computers booting into a BASIC interpreter did for BASIC - a one-stop shop for (at least beginner-level) Javascript development. It would allow you to type Javascript code right into a web page and see its output right there on the page immediately (perhaps with separate divs within the page for the Javascript code, the program's output and tutorials).

The code editor might have syntax highlighting, intellisense, a built-in debugger - whatever can be provided to make programs as easy as possible to develop.

Once you've written some code you can save it on the website and access it from anywhere. You can also share it with friends. If one person defines an object someone else can use that object in their programs. In this way, a rich ecosystem of scripts can develop.

Another possible refinement would be for the web server itself to provide some abilities that scripts can use. Perhaps just storing a small amount of data per script per user so that scripts can do some persistent stuff, or perhaps allowing some server-side JavaScript as well as the client-side scripts, to enable the writing of rich AJAX web applications.

TODO-list management website

Tuesday, October 21st, 2008

I'm a big user of TODO lists. I generally keep a text editor open with at least one todo.txt file (either general or project-specific).

It would be nice to have a website to manage these lists of tasks and use them to help manage time and generate schedules. The schedules should be quite informal - each item should fall into one of three categories - tasks that should take less than a day, tasks that will probably take more than a day (and should be further broken down to get an accurate schedule) and tasks that have not yet been placed into one of the previous two buckets (more details on this costing algorithm).

The site should also have the ability to suggest the next task and allow the user to create dependencies between tasks (e.g. A must be completed before B can be started).
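A minimal sketch of how that suggestion logic might work (the bucket names and data layout are my own invention, not from the post): a task is eligible once all its dependencies are done, and short, already-costed tasks are preferred.

```python
# Hypothetical sketch: tasks carry a cost bucket ("short" = under a day,
# "long" = needs breaking down, "uncosted" = not yet estimated) and a set
# of dependencies; a task is eligible once its deps are all done.
BUCKET_ORDER = {"short": 0, "uncosted": 1, "long": 2}

def suggest_next(tasks, done):
    """tasks: dict name -> {"bucket": str, "deps": set}; done: set of names."""
    eligible = [
        name for name, t in tasks.items()
        if name not in done and t["deps"] <= done
    ]
    if not eligible:
        return None
    # Prefer short tasks, then uncosted ones (which need estimating), then long.
    return min(eligible, key=lambda n: BUCKET_ORDER[tasks[n]["bucket"]])
```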

PHP could be more secure

Monday, October 20th, 2008

Given that PHP is designed to be used to write applications that run on web servers, you'd think it would have been designed rather more with security in mind.

In particular, PHP's dynamic typing seems to be a source of security weaknesses. Dynamic typing has advantages in rapid development and code malleability but is not particularly helpful for writing secure code - security is greatly helped by being able to restrict each variable to a specific set of values and having the compiler enforce this.

Similarly with the SQL API - because the interface is all just strings instead of strongly typed objects, SQL injection vulnerabilities become all too easy to write.
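The usual fix is a parameterized query API that keeps values out of the SQL string entirely. A sketch of the idea using Python's sqlite3 module (the post is about PHP, where prepared statements play the same role):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Vulnerable pattern: splicing the value into the SQL string itself, e.g.
#   "SELECT role FROM users WHERE name = '%s'" % user_input
# Safe pattern: the value travels separately and is never parsed as SQL.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
# The injection attempt matches nothing, because it was treated as a plain string.
```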

Variable scope is another one - because there are no variable declarations, it's not obvious where variables are introduced, so one could be using variables declared earlier without realizing it (this is why register_globals went from default-on, to default-off, to deprecated, to removed).

Then there are ill-conceived features like magic quotes, and missing features like cryptographically secure random number generation.

A well-designed language for web development would be secure by default when doing the most obvious thing - one shouldn't have to go out of one's way to learn what all the security pitfalls are and then write code to explicitly address each of them (updating that code when the next such pitfall is discovered).