A while back I wrote about Wikipedia’s selfish ‘nofollow’ linking. I made the argument that by tacking the nofollow attribute onto all external links, Wikipedia tricks Google and other search engines into thinking Wikipedia is even more important than it already is. Intentional or not, the policy skews search engine results significantly because of how many people link to Wikipedia.

Turns out that Wikipedia didn’t focus the skew purely on itself. TechCrunch has noticed that Wikipedia gives Wikia preferential treatment. Wikia is a commercial project co-founded by Wikipedia founder Jimmy Wales.

Again, this is not necessarily intentional, but it makes the skew even worse. Wikipedia collects massive numbers of inbound links without giving anything back to the web community. Instead, it channels its importance into promoting a commercial project of the founder’s choosing. This is very unfortunate.

Agency Byte has a good article up about scope creep: the unfortunate tendency of project scopes to balloon during a contracting job. Most clients don’t know exactly what they want, but they’ll be sure to tell you during your work, and they’ll expect you to add these ‘obviously needed’ new things without any additional payment.

Personally I believe that an agile development method is a good way to counter scope creep. When you deliver small incremental improvements frequently, there are fewer surprises for the customer and many more chances to make adjustments. In fact, this might be one of the most important features of agile development. Of course, you still need a clearly defined scope or your project may never end.

So for defining that scope, or if agile isn’t your thing at all, Agency Byte has plenty of advice for you. Here’s a good one:

“Defining what is “out of scope” can be as important as defining what is in, especially because you probably already know the areas in which clients usually tend to push the limits. Write them down and include them in your proposal.”

Read all of Agency Byte’s tips here.

If you’re a regular reader of our blog, you may remember an article a while back about a piece of software called Cacti. It’s a nifty little web-based program that gathers information from a variety of hardware using SNMP. Cacti then presents the data in easily readable graphs.

At the time of the article, I installed Cacti for one of the organizations whose IT infrastructure I administer. Not only did I get a better idea of the bandwidth and hardware utilization on the servers, but I could also see how much CPU the workstations were consuming. Although I knew that the CPU usage on the servers was quite low, I didn’t anticipate that the CPU usage on the workstations would turn out to be quite as low as it did.

The organization is a typical office environment with 20-some workstations running mainly our own software plus web, e-mail, word-processing and spreadsheet applications. The hardware is quite modern, with Celeron CPUs ranging from 1.8 GHz to 2.7 GHz and RAM between 256 MB and 512 MB. All workstations run Windows 2000 Professional. Before I installed Cacti, I thought that the CPU usage during the day would average maybe 30-40%, with some significant peaks pushing up the average. I was quite surprised to find out how wrong my estimate was. It turned out that the average CPU usage was less than 10% for all machines, and less than 5% for most of them, with only a few significant spikes. It’s true that Cacti only polls the workstations every five minutes, but over the months I’ve been running it, that should still give a fairly accurate picture.
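
For the curious, here is a rough sketch of the kind of polling Cacti does behind the scenes. It assumes the workstations answer SNMP v2c queries with the community string ‘public’ and expose HOST-RESOURCES-MIB, and it simply shells out to net-snmp’s snmpwalk; the host names are made up:

    # Rough sketch of SNMP-based CPU polling, similar in spirit to what Cacti does.
    # Assumes net-snmp's snmpwalk is installed and the workstations answer SNMP v2c
    # queries with the community string "public" (both are assumptions on my part).
    import subprocess

    # hrProcessorLoad from HOST-RESOURCES-MIB: per-CPU load in percent.
    HR_PROCESSOR_LOAD = "1.3.6.1.2.1.25.3.3.1.2"

    def cpu_load(host, community="public"):
        """Return the average CPU load (percent) reported by one workstation."""
        out = subprocess.run(
            ["snmpwalk", "-v", "2c", "-c", community, "-Oqv", host, HR_PROCESSOR_LOAD],
            capture_output=True, text=True, check=True,
        ).stdout
        values = [int(v) for v in out.split()]
        return sum(values) / len(values)

    # Hypothetical workstation names.
    for host in ["ws01.example.local", "ws02.example.local"]:
        print(f"{host}: {cpu_load(host):.1f}% CPU")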

Sample of the monthly view in Cacti

With this data on hand, it’s quite obvious that we’ve been over-investing in workstation hardware. Even though I would rather overestimate a little than underestimate, my estimates were far too high. Even so, when purchasing new workstations we’ve pretty much bought the cheapest Celeron available from the major PC vendors, so it might have been hard to adjust the purchases even with these figures on hand.

After some thinking I came up with three possible ways to deal with this problem:
1. Ignore it
I guess this is what most companies do. Maybe the feeling of being ‘future-proof’ is valued more than the fact that you have a lot of idle time. The benefit of this approach is that you have modern hardware that is less likely to break than old hardware.

2. Buy used hardware
Most decision-makers would be scared by the mere thought of this. However, since there is no truly low-end hardware being sold new, this appears to be the only way. The first problem you will face is probably finding uniform hardware. As an administrator, you know how much easier it is to administer 20 identical workstations rather than 20 different ones, both in terms of drivers and hardware maintenance.

The second problem that comes to mind is reliability. Obviously, hardware that is 5-10 years old is more likely to break than brand new hardware, and if it does, there is no warranty to cover it. On the other hand, if you buy used hardware your budget will likely allow you to buy a couple of replacement machines.

There are also security implications of buying used computers. Every modern company with an intelligent IT staff is concerned about security, in both software and hardware. If you buy used hardware, there is a chance that it has been compromised (hardware sniffers, etc.). I guess the only way to deal with this is to carefully physically inspect all the hardware you purchase.

If you or your company do choose to buy used hardware, there are plenty of sources. One of the more interesting ones I found was an Australian company called Green PC, which sells a variety of computers and peripherals at reasonable prices.

3. Donate idle-time (to internal or external use)
With the rise of clusters, distributed computing and virtualization, there are plenty of ways to put idle time to good use today. One of the more famous projects in this area is Folding@Home, a Stanford University project that uses participants’ idle CPU/GPU time for medical research. More recently, a Berkeley project called BOINC created a program that lets the user choose between a variety of distributed computing projects within the same application. By participating in such a project, the company can create positive publicity (if the participation is significant).

If your company isn’t interested in donating idle time to charity or research, it might still be able to use it internally. If all your workstations are connected with a high-speed connection (preferably gigabit), you might be able to use them in a virtualization environment; this is doable in theory, but I don’t know how well it would work in practice. Another alternative is to use the idle time for internal computations. If your company is in the software business, distributed compiling is one way to use the CPUs more efficiently. Beyond that, there are plenty of distributed computing solutions that could run on the intranet for calculations the company would otherwise pay a third party to perform.
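
To make the distributed compiling idea concrete, here is a minimal sketch of how a build could be farmed out with distcc. I’m assuming distcc is installed on the build machine, that the distcc daemon runs on each workstation, and that the project builds with make and gcc; the host names are invented:

    # Sketch: spreading compile jobs over idle workstations with distcc.
    # Assumptions: distcc on the build box, distccd running on each workstation,
    # and a make/gcc based project. The host names below are invented.
    import os
    import subprocess

    idle_workstations = ["ws01.example.local", "ws02.example.local", "ws03.example.local"]

    env = os.environ.copy()
    env["DISTCC_HOSTS"] = " ".join(idle_workstations + ["localhost"])

    # One job slot per remote workstation plus the local machine, compiling via distcc.
    subprocess.run(
        ["make", f"-j{len(idle_workstations) + 1}", "CC=distcc gcc"],
        env=env, check=True,
    )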

Hopefully you now have a better idea of how to use your idle time more efficiently, but you should be careful. Some people argue that today’s computers are not built to run at 100% utilization 24/7. This is a valid point, since neither the components on the motherboard nor the fans are likely to withstand 100% utilization for very long without breaking. It is therefore advisable to find a distribution algorithm that spreads the calculation over the nodes without pushing individual workstations until they break. I have to admit that I don’t have any data on how the lifetime of the workstations is affected by running this type of software, but I would guess it has some impact.
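
To illustrate what I mean by a gentler distribution algorithm, here is a trivial sketch that only hands work to workstations whose recent CPU load is below a cap; the load figures would come from polling such as the Cacti/SNMP example above, and the cap is an arbitrary number I picked for the example:

    # Sketch of a load-aware dispatcher: only give new work to nodes that are
    # mostly idle, so no single workstation gets pushed to 100% around the clock.
    CPU_CAP = 60  # arbitrary example cap, in percent

    def pick_node(recent_load):
        """Pick the least busy node under the cap, or None if everyone is too busy."""
        eligible = {host: load for host, load in recent_load.items() if load < CPU_CAP}
        if not eligible:
            return None
        return min(eligible, key=eligible.get)

    # Example load figures, as they might come back from the SNMP polling above.
    print(pick_node({"ws01": 4, "ws02": 55, "ws03": 71}))  # -> ws01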

To round up this article, I would like to discuss one question that is highly relevant: “Why are there no low-end, cheap computers available?”

If you go to Dell’s or HP’s homepage and look for their cheapest ‘office’ hardware, it’s still far more than what is required for most office use. So why is it this way? As I see it, there are several reasons, involving both the software and hardware manufacturers in a mutual effort to stimulate sales. Obviously, the hardware manufacturers want us to replace our computers as often as possible, since this is how they make their profits. The software manufacturers, on the other hand, want to sell new versions of their software by adding fancy features that are unlikely to improve productivity but require hardware upgrades to run properly.

Let’s say you’re on board with my ideas and decide to look for cheaper hardware, but still feel that used hardware is too risky. One possibility might be to go for some kind of ITX solution. These come with a less powerful CPU, often include everything you need for desktop usage, and cost less than regular computers. One benefit of ITX boxes is that they are very small and light, which makes them cheap to store and ship internally.

A while back I became responsible for the IT infrastructure of another company. Fortunately, this type of company tends to be easy to manage with the right software and hardware. This time, however, I thought about trying something new. For many, many years I’ve been running Linux on the server for a variety of companies, but now I’m thinking about taking things one step further: migrating the desktops. I’ve already done some initial research into what applications the company is using, and it appears that the few Windows-only applications they use can be run through Wine.

With the impressive improvements Ubuntu has made to Linux on the desktop, it might be about time to give it a shot. Ubuntu Desktop comes with all the software needed for the business (word processing, spreadsheets, web browser, e-mail, etc.), which makes it convenient to install and maintain.

At the moment this company is in really bad shape in terms of its IT infrastructure. They have no server, and use one of their workstations to ‘share’ the files used by a critical business application. Obviously this needs to change.

Starting with the server, I’m considering either Ubuntu Server (which goes well with Ubuntu Desktop) or CentOS, a clone of Red Hat Enterprise Linux, as the operating system. As for the hardware, I’m considering a cheap Dell or HP server with two extra SATA hard drives running in a RAID 1 array for the critical files. The server also needs a UPS attached to ensure good uptime.
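
Assuming the RAID 1 array ends up as Linux software RAID (md), a small check like the sketch below could run from cron and warn when the mirror degrades; a healthy array shows up as “[UU]” in /proc/mdstat, and an underscore means a member has dropped out:

    # Sketch: warn when a Linux software RAID (md) mirror is degraded.
    # Assumes the RAID 1 array is built with md, so its status is visible in
    # /proc/mdstat ("[UU]" when healthy, "[U_]" when a member has dropped out).
    import re

    def degraded_arrays(mdstat_path="/proc/mdstat"):
        """Return the names of md arrays that are missing a member."""
        bad, current = [], None
        with open(mdstat_path) as f:
            for line in f:
                m = re.match(r"(md\d+) :", line)
                if m:
                    current = m.group(1)
                # An underscore inside the status brackets means a failed/missing disk.
                elif current and re.search(r"\[[U_]*_[U_]*\]", line):
                    bad.append(current)
        return bad

    broken = degraded_arrays()
    if broken:
        print("WARNING: degraded RAID array(s): " + ", ".join(broken))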

So now we have both our server and desktops running Linux, and we get that good feeling in our chest. Now what? The next (and final) thing is to set up file sharing and centralized user administration. The problem with this step is that there are many different paths to choose from. I narrowed it down to three options to consider.

    Samba PDC login with Samba file-sharing

  • Benefit: Works well even in a mixed environment with Windows machines.
  • Drawback: Not a native UNIX filesystem, hence no support for UNIX file permissions.

    Kerberos with NFS file-sharing

  • Benefit: Has been around for quite some time. Fully compatible UNIX filesystem. Recommended by FreeBSD’s handbook.
  • Drawback: More complicated to set up. A second (slave) server is strongly recommended.

    NIS with NFS file-sharing

  • Benefit: Has also been around for quite some time. Fully compatible UNIX filesystem. Easy to set up.
  • Drawback: Old. Not as secure as Kerberos.

All of these solutions can be run with OpenLDAP as a back-end, which makes administration and integration of other applications easier later on. In a way, the flexibility of Linux/Unix becomes somewhat of a disadvantage here: since there are almost infinitely many ways to approach the problem, it takes a great amount of research.
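
As a taste of why an OpenLDAP back-end is attractive, the sketch below looks up an account with the stock ldapsearch client. The server address, base DN and user name are made-up examples, and a real setup would bind over TLS rather than using an anonymous simple bind:

    # Sketch: looking up a user account in an OpenLDAP back-end with ldapsearch.
    # The server address, base DN and user name are made-up examples; a real setup
    # would use TLS and a proper bind rather than anonymous simple auth (-x).
    import subprocess

    def find_user(uid, server="ldap://server.example.local", base="dc=example,dc=local"):
        result = subprocess.run(
            ["ldapsearch", "-x", "-LLL", "-H", server, "-b", base,
             f"(uid={uid})", "uid", "uidNumber", "homeDirectory"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    print(find_user("jdoe"))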

In my research, I first posted at Ubuntu’s forum and later at Gentoo’s forum, but neither really gave me a good answer. Right now I’m leaning toward Kerberos, since it appears to be more secure and robust.

Since this project is scheduled for the summer, I haven’t started implementing this yet. However, I’m curious about what you readers think. How would you approach this problem?

Both Alex and I have been very busy lately, and therefore we haven’t found the time to write on the blog. However, this doesn’t mean we’re not working. This past weekend we posted two job ads on Craigslist: one for a web designer and one for a PHP developer.

For each ad, Craigslist charges $75, which must be considered quite low given the number of people you reach and what the competitors charge.

Just a few minutes after we posted the ads, the applications started to come in. Below you’ll find the distribution of applications we have received so far.

Craigslist Applicants

Next week, Alex and I are going to sit down and go through everyone in the ‘To Be Considered’ section to find the best applicants. When that process is done, we will contact them to set up in-person interviews.
