I’ve spent quite a bit of time, over the last 5 or 6 days, diving into WordPress and learning what makes it tick. Parts of WordPress are really impressive – just flat-out cool. The way some of it works is fairly complex, and deciphering it sometimes means reading page after page after page to understand an entire routine. But sometimes, when you finally see, end to end, how something in WordPress works – I mean really see the individual bits of the engine – you have to admit it teaches you a little about PHP. WordPress, underneath it all, is a pretty big beast, and its strength and ubiquitous presence come largely, I think, from the fact that it can do virtually anything. The really sweet plugin system, the way hooks work, “The Loop,” the dynamic options panel – it’s all very educational.
The interesting thing here is that I’ve browsed the source of Slash, Scoop, phpNuke, and now WordPress, and all of them are definitely more complex and much heavier than the entire OSNews codebase. Now, before you jump all over me – firstly, Slash and Scoop are Perl, and I don’t really read Perl, so I can’t speak as an expert there. Secondly, WordPress and Nuke are both very portable and dynamic, whereas OSNews has a narrow focus and, location-wise, is very static. But that aside, OSNews has withstood simultaneous link bombs from Slashdot and Digg. As amazing as WordPress is, it’s mostly amazing that it functions at all and loads in less than 2 minutes per page with as much going on as I can see behind the scenes. That’s not a cut on WordPress, by the way.
In fact, if anything, what has really been impressed upon me is how smooth and simple the OSNews code is, if I may be so bold. OSNews runs superfast due, in part, to lots of creative caching, some on-demand, some via cron. But it also does so because of highly efficient queries whose JOINs are measured for speed, meaning in some cases it’s faster to do 20 simple queries than one complex one, or than to build a long and scary chain of “OR x=a OR x=b OR x=c OR x=d…” Watching WordPress code in action is really fun for me, but watching OSNews work, knowing what I now know about how much work PHP can cram into its threads, is even more fun.
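To make that concrete, here’s a minimal sketch of the two approaches – not actual OSNews code; the $mysqli connection and the stories table are invented for illustration:

    <?php
    // Option 1: one query with a long OR chain (an IN list works the same way) --
    // fine when the optimizer walks the index, painful when it falls back to a full scan.
    $ids = array(12, 45, 78, 90);
    $clauses = array();
    foreach ($ids as $id) {
        $clauses[] = 'id = ' . (int) $id;
    }
    $big = $mysqli->query('SELECT id, title FROM stories WHERE ' . implode(' OR ', $clauses));

    // Option 2: several trivial primary-key lookups -- each one is a single index hit,
    // and each result is easy to cache on its own.
    $stories = array();
    foreach ($ids as $id) {
        $row = $mysqli->query('SELECT id, title FROM stories WHERE id = ' . (int) $id)->fetch_assoc();
        if ($row) {
            $stories[] = $row;
        }
    }
    ?>

Which one wins depends entirely on the data and the indexes – that’s the whole point of measuring.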
I’m definitely a WordPress fan. I’ve been on the fence between using Joomla and WordPress, but the huge community around WordPress definitely makes it more user-friendly.
The coolest thing about the WordPress code is how simple they’ve kept it. They didn’t build some grand OOP scheme that takes you hours to track down and replace a piece of core functionality. It also helps you understand how to write your own plugins without having to refer all that much to the API documentation.
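For what it’s worth, a plugin really can be that small. Here’s a bare-bones sketch – the plugin name and callback are made up, but add_filter and the ‘the_title’ hook are the real WordPress API:

    <?php
    /*
    Plugin Name: Hello Filter (example)
    Description: Minimal sketch of a WordPress plugin -- one file, one filter.
    */

    // Append a marker to every post title via the core 'the_title' filter.
    function hf_tag_title($title) {
        return $title . ' *';
    }
    add_filter('the_title', 'hf_tag_title');
    ?>

Drop a file like that into wp-content/plugins/, activate it, and every title picks up the suffix – no core files touched.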
But then along come all their security bummers. So I guess they aren’t to be praised endlessly for their efforts.
@Benjamin:
I hear ya on all fronts. In my case, I just think the concept of extending core functionality via filters, hooks, and actions is really freakin’ cool. And although there is plenty of OOP (in the form of $comment and The Loop), you’re right: learning WordPress is not hard, it just takes some time to understand the inclusion loops.
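For anyone who hasn’t peeked inside a theme file, The Loop is roughly this – have_posts() and the_post() drive a global query object behind the scenes, and the template tags read from it:

    <?php
    // The Loop, more or less as it appears in a theme's index.php.
    if (have_posts()) {
        while (have_posts()) {
            the_post();                      // set up the global $post for the tags below
            the_title('<h2>', '</h2>');      // echo the post title wrapped in markup
            the_content();                   // echo the post body
        }
    }
    ?>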
Your reference to join performance caught my attention. I’m assuming you’re using an SQL-based RDBMS, and that x is indexed (or x is the first column in a multipart key using an ordered index, e.g., a tree rather than a hash).
There are two likely reasons you are able to beat the performance of the join:
1) The query plan generated is performing a sequential scan of the table (looking at all rows) to find the matching rows, rather than internally doing individual index lookups (the disjunctive-normal-form approach), and the table is relatively large. This means either the statistics for the table are bad or the query optimizer isn’t doing a good job – an EXPLAIN, as sketched after point 2, will show you which plan you’re actually getting.
2) You are, in fact, only interested in some subset of the rows returned, and are able to examine only that smaller number by doing individual searches. This is something which can be accomplished (and which you would have accomplished, partially in your code) using a navigational/relational interface rather than a fully SQL/relational interface. That would be the case, for example, if you were only concerned with whether or not at least one matching row was present, as when manually checking a constraint of some sort.
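If it helps, here’s how I’d check for reason 1 if you’re on MySQL – run the same statement under EXPLAIN and read the type and rows columns of the plan (the table and column names here are hypothetical):

    <?php
    // Ask MySQL for the query plan instead of the rows.
    $plan = $mysqli->query(
        'EXPLAIN SELECT c.* FROM comments c
           JOIN stories s ON s.id = c.story_id
          WHERE s.id IN (12, 45, 78)'
    );
    while ($row = $plan->fetch_assoc()) {
        // type = ALL plus a large rows estimate means a full table scan;
        // type = ref / eq_ref / range means the optimizer is using an index.
        echo $row['table'], ': ', $row['type'], ' (~', $row['rows'], " rows)\n";
    }
    ?>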
Thus concludes the DBMS lesson for the day 😎
@James:
Actually, what I really meant was that too many crappy DB designers either over- or under-normalize their tables, and as a result end up with way too many database queries or, conversely, an overload of poorly performing complex queries.
At OSNews, particularly as we transitioned to version 3 and the “OSNews janitor” and “soup nazi” errors became more prevalent, we had to learn a lot about how to do lots of database querying with minimal impact. It turns out that lots of the old queries could be majorly compressed. And in some cases, some sections of the site were better off using speedy little queries, because queries that used clauses like HAVING were very heavy. Certain statistics pages, especially, were just brutal on the CPU, but designing the interaction differently meant getting the same data, in some cases, up to 100x faster.
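As a sketch of what that kind of redesign can look like (these are not the actual OSNews queries – the comments and user_stats tables are invented): the heavy version re-aggregates on every page view, while the lighter one keeps a running counter so the stats page becomes a trivial indexed read.

    <?php
    // Heavy version: re-aggregate the whole comments table every time the stats page loads.
    $heavy = $mysqli->query(
        'SELECT user_id, COUNT(*) AS n
           FROM comments
          GROUP BY user_id
         HAVING COUNT(*) > 100
          ORDER BY n DESC
          LIMIT 25'
    );

    // Lighter redesign: bump a per-user counter when a comment is posted...
    $mysqli->query('UPDATE user_stats SET comment_count = comment_count + 1 WHERE user_id = 42');

    // ...so the stats page is a simple read against an indexed summary table.
    $light = $mysqli->query(
        'SELECT user_id, comment_count
           FROM user_stats
          WHERE comment_count > 100
          ORDER BY comment_count DESC
          LIMIT 25'
    );
    ?>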
At work we are a SQL Server shop, but we use MySQL at OSNews, and it works differently enough that I’m not much aware of updating statistics or researching query plans. I can tell you that I do lots of testing on indexes, and ~ OH BOY! ~ do they make a difference!
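A crude sketch of the kind of before-and-after test I mean – not my exact scripts; the table, column, and index names are made up:

    <?php
    // Time a lookup on an unindexed column.
    $start = microtime(true);
    $mysqli->query('SELECT id FROM comments WHERE story_id = 1234');
    $before = microtime(true) - $start;

    // Add the index, then time the same lookup again.
    $mysqli->query('ALTER TABLE comments ADD INDEX idx_story (story_id)');
    $start = microtime(true);
    $mysqli->query('SELECT id FROM comments WHERE story_id = 1234');
    $after = microtime(true) - $start;

    // Crude, but on a big table the difference is usually hard to miss.
    printf("before: %.4fs  after: %.4fs\n", $before, $after);
    ?>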