Monday, April 30, 2007

They're Gonna Filter Our SMSs!

This one's hilarious in a very sad way. Our government is going to start filtering our SMS (and possibly MMS) messages. I read the news here, and I have no other sources with more information, but this is exactly the kind of action you could expect from these imbeciles.
And if I know them, they will execute it in the most outrageous and stupid way possible. And they will cite "protection of civil rights" as their motive, never once mentioning alternatives like letting people themselves blacklist the senders they don't want to hear from. No sir! People must be "protected" in spite of themselves. And I guess everybody in Iran knows that, given the option, the first SMSs people would bar would be advertisements and other junk sent by all kinds of government organizations and corporations!
Of course, if implemented, the whole process will be closed. The contract will be given to someone's cousin, and we (the minor group called "the people") will have no way at all to influence the system or even request information about it.
Man I get angry sometimes.

Wednesday, April 25, 2007

BitTorrent Share Ratio Now More Than Half

I use the BitTorrent network extensively to download stuff, and hopefully to distribute stuff in the future. I'm not going to talk about the merits of BT in comparison with other client-server and P2P technologies (it is better!)
I use Azureus as my BT software (or client?) and I'm fine with it. It has a lot of features that one will use (and some that I don't,) and a plug-in system that I haven't used yet. My only bone to pick with Azureus is that it's a Java program, so it's somewhat (a little, really) heavy on resources and supposedly a bit slower than it could have been. But it's the best client I've seen. (It's taking a hideous turn in their new version 3.0 though. Think Windows Media Player 6.4 versus Media Player 10! *shudders*)
Another very good BT client is µTorrent. It is fast, small (think 200 KiB!) and it gets the job done. It doesn't even need installation! The only problem is that it's not free software. It's not even open-source! Anyway, I suggest µTorrent to all.
The reason I'm rambling on about BitTorrent is that my share ratio finally crossed the 0.5 point last night. That means the number of bytes I've uploaded is now more than half the number of bytes I've downloaded (via Azureus only, on Mike.) The numbers are 135.10 GiB uploaded and 270.00 GiB downloaded in 264 days, 20 hours. I know that a 0.5 share ratio is not something to be proud of, but it's at least better than that of everybody I know about (over here in Iran.) Considering the fact that I'm on an ADSL connection (Asymmetric DSL) whose uplink bandwidth is half of its downlink, I guess my ratio is OK.

Tuesday, April 24, 2007

Updated Movie List

Once again, the list is hidden initially to conserve space. I'm going to clean up my archive in the near future, removing some duplicate movies (in different formats or qualities,) and some unworthy movies too.
Also, I'm archiving my... err, archive, on WORM media (i.e. DVD.)

Wednesday, April 18, 2007

And Nothing Else

The S&M version of Nothing Else Matters is just awesome!
That's it, and I don't care how stupid other fans consider that remark. I have nothing more to say!

Thursday, April 12, 2007

Boost 1.34 is Near

If you program in C++, and you've been doing so for more than a year, and you do not know about or use the Boost libraries, you're missing the point of C++. Go get them. Please do so! It's not just that the Boost libraries are very useful as tools; even reading the prefaces to the documentation gives you insight into software design in general and C++ programming in particular. Delving into the source is an eye-opener for any C++ programmer who thinks he/she knows the language.
The libraries are so useful that I've been using them (or wishing I could use them) in all my projects over the past year. And the collection is expanding by the week. Also, many of the libraries that are currently part of Boost have been accepted into the standard library of the next version of the C++ language, what people call C++0x (because it will hopefully come out before 2010, but we don't know when.)
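To give a taste of what they feel like in practice, here's a minimal sketch of my own (a toy example, not taken from the Boost docs) using two of the smaller libraries: shared_ptr, which is among the pieces adopted for the upcoming standard, and lexical_cast, which is simply handy:

```
#include <boost/shared_ptr.hpp>
#include <boost/lexical_cast.hpp>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    // shared_ptr: reference-counted ownership -- no manual delete, ever.
    boost::shared_ptr<std::string> msg(new std::string("42"));
    std::vector<boost::shared_ptr<std::string> > copies(3, msg); // all three share one object

    // lexical_cast: string <-> number conversions in one readable line.
    int n = boost::lexical_cast<int>(*msg);
    std::cout << *msg << " squared is "
              << boost::lexical_cast<std::string>(n * n) << std::endl;

    return 0; // the string is destroyed when the last shared_ptr lets go of it
}
```

The point is how little ceremony there is: no manual delete, no sprintf/sscanf dance.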
From the traffic on the Boost developers' mailing list (which is very high volume and highly technical, certainly above my level,) it seems that version 1.34 is near (the current version is 1.33.1,) and it will contain many new libraries, most notably asio. Of course, you can get all the bleeding-edge stuff mostly from their respective homes, and definitely from the Boost CVS repository (which I do,) but keeping your compiled versions up to date when a full rebuild takes 20 minutes on my system, plus the chance of slight incompatibilities, is a bit more than I'm prepared to take on my plate.
Anyway, go and check it out if you're not already familiar with it. See the amazing stuff these people are doing. Maybe I'll try to write about some of the boost libraries that I'm more familiar with. Maybe!

Sunday, April 01, 2007

Must We Multi-thread? (Part 3)

Up to this point, we've talked about how processor clock speeds are not advancing as fast as they used to; we may not have hit the physical limits of the current generation of chip-making technology yet, but we're not far from them. Dual-core CPUs are quite the norm now, and quad-core is becoming popular. I'm guessing we're going to see 16 or more hardware threads in an end-user-level system in less than 2 years. That means we have to write concurrent software to be able to exploit this, or we'll be swept away by the people who do write scalable programs. On the other hand, the data we need to operate on gets larger by the minute, and memory access rates are becoming more and more the bottleneck. This emphasizes the role of cache hierarchies. But as the number of threads goes up, so does the rate of cache misses. This is but one of the problems we face in multi-threading our code.
Also, writing a concurrent program is hard. It's certainly harder than the serial version (most of the time,) both because we are used to (or trained to be used to) thinking and designing serially, and because most of our tools and languages focus on that. I'm not an expert on the theory of complexity, but it seems to me that designing a scalable parallel program (or algorithm) is innately much harder than ordinary program design.
A third source of problems is race conditions, deadlocks, livelocks, priority inversions and a plethora of other pitfalls and hazards. They make analyzing, testing, debugging and making guarantees about a program's behavior much harder.
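As a toy illustration of how easy it is to fall into one of these traps, here's a minimal deadlock sketch of my own using Boost.Threads (the library I'd reach for, not anything from a real codebase): two threads acquiring the same two mutexes in opposite orders. Whether it actually hangs on a given run depends on scheduling, which is exactly what makes this class of bug so nasty to test for.

```
#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>

boost::mutex a;
boost::mutex b;

void worker1()
{
    boost::mutex::scoped_lock lock_a(a); // locks a first...
    boost::mutex::scoped_lock lock_b(b); // ...then b
}

void worker2()
{
    boost::mutex::scoped_lock lock_b(b); // locks b first...
    boost::mutex::scoped_lock lock_a(a); // ...then a -- the opposite order!
}

int main()
{
    boost::thread t1(&worker1);
    boost::thread t2(&worker2);
    // If each thread manages to grab its first mutex before the other
    // is done, both will wait forever right here.
    t1.join();
    t2.join();
    return 0;
}
```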
Another hardship is performance. As it happens, naive ways of achieving concurrency can lead either to a lot of bugs due to lack of proper synchronization, or to a performance penalty because of excessive and/or misplaced locking or flawed design. This performance hit (which mainly comes from bad design, but can also come from the overhead of the synchronization primitives) can be so huge that the parallelized version may even be slower than the serial version! Achieving a (near) linear performance boost (linear in the number of hardware threads) is no simple task, even in the simplest and best cases (e.g. when data is not shared between the threads.)
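To make the locking-overhead point concrete, here's another toy sketch of mine (again Boost.Threads, and the numbers are made up for illustration): the first worker takes the mutex on every single increment, while the second works in a thread-private variable and touches the shared state only once.

```
#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/bind.hpp>

boost::mutex m;
long total = 0;

// Naive: take the lock on every single iteration. The threads spend most
// of their time fighting over the mutex instead of doing useful work.
void count_naive(long n)
{
    for (long i = 0; i < n; ++i)
    {
        boost::mutex::scoped_lock lock(m);
        ++total;
    }
}

// Better: do the work in a thread-private variable and touch the shared
// state exactly once at the end.
void count_batched(long n)
{
    long local = 0;
    for (long i = 0; i < n; ++i)
        ++local;
    boost::mutex::scoped_lock lock(m);
    total += local;
}

int main()
{
    // Timing the two runs (e.g. with boost::timer) shows the price of the
    // per-iteration lock; the batched version is dramatically faster.
    boost::thread a1(boost::bind(&count_naive, 10000000L));
    boost::thread a2(boost::bind(&count_naive, 10000000L));
    a1.join(); a2.join();

    boost::thread b1(boost::bind(&count_batched, 10000000L));
    boost::thread b2(boost::bind(&count_batched, 10000000L));
    b1.join(); b2.join();
    return 0;
}
```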
Many programmers, faced with the task of parallelizing the design of a program, go down the road of task parallelism, which means they divide the tasks a program has to do among the threads. The threads execute different operations on (the same or different) data. Pipelining is a variation of this method. This has several advantages: it's usually simple enough, it usually minimizes (or at least restricts) the data sharing between the threads, it can be fast, and it can be generic and applicable to a broad range of situations.
But the approach is not scalable. There are not many situations where you can divide a program into 10 or more meaningful, mostly independent and orthogonal tasks and still keep the benefits of this approach. There's almost no way to divide a program into 1024 tasks!
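A skeleton of what task parallelism typically looks like in code (a sketch of mine, not a real pipeline): each thread is a different, named stage, so the thread count is fixed by the design and not by the hardware.

```
#include <boost/thread/thread.hpp>

// Task parallelism: each thread gets a *different* job. The number of
// threads is fixed by the design (three stages here), so a 16-way machine
// buys you nothing beyond the third core.
void read_input()   { /* stage 1: pull the data in */ }
void process_data() { /* stage 2: transform it */ }
void write_output() { /* stage 3: store the results */ }

int main()
{
    boost::thread reader(&read_input);
    boost::thread processor(&process_data);
    boost::thread writer(&write_output);

    reader.join();
    processor.join();
    writer.join();
    return 0;
}
```

In a real pipeline the stages would be connected by queues; this is just the shape of the thing.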
Another approach is data parallelism, and I will cover what that means in a later post.
One way to evade many of the above problems is to go for a coarser level of parallelism. That is, divide the program into many programs, not many threads. This can be used (most of the time) whether your design parallelizes tasks or data, but it's not applicable to many high-performance and real-time applications: while the design gets simpler and more manageable, the overhead of spawning a new process and of interprocess communication can be noticeably higher than the same operations for threads. On some systems (namely Windows,) it's really significant. I will get a little bit more into this in the future.
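In the meantime, here's a rough sketch of what that coarser-grained, multi-process approach can look like (a POSIX-flavoured toy of mine using fork(); on Windows you'd use CreateProcess and the picture, and the costs, would look different): one worker process per slice of the work, sharing nothing unless you explicitly set it up.

```
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    // One worker *process* per slice of the work. The children share nothing
    // by default, which removes most race/deadlock worries, but each fork()
    // and each round of IPC costs noticeably more than the thread equivalent.
    const int workers = 4;
    for (int i = 0; i < workers; ++i)
    {
        pid_t pid = fork();
        if (pid == 0)      // child: handle slice number i, then get out
        {
            /* ... work on slice i ... */
            _exit(0);
        }
    }
    for (int i = 0; i < workers; ++i)
        wait(0);           // parent: collect all the children
    return 0;
}
```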