
If you’ve followed us for a while, you might remember our article ‘Selecting an Accounting System’. Not too many things have changed since then. The developers over at QuickBooks are still not competent enough to write an OS-independent web app, and no other major events have occurred. The most interesting thing that has happened since we wrote the article is that SQL-Ledger silently changed its license from GPLv2 to the SQL-Ledger Open Source License. Although we would rather have seen SQL-Ledger stay GPLv2, we don’t really blame them; the company needs to make some money in order to survive. So, despite the change of license, we still decided to use SQL-Ledger as our accounting system. At least for now.

Enough about that, let’s get started. The first time I installed SQL-Ledger I ran into a couple of problems. Even though I spent a fair amount of time researching how to set everything up, I still hit a couple of speed bumps. Since I’m quite new to PostgreSQL, some of my problems were related to that. Others were caused by the manuals not really matching what I saw on the screen.

First, we update the ports tree to make sure we’re getting the latest version.

# portsnap fetch; portsnap update
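
Note that if this is the first time you use portsnap on the machine, you need to fetch and extract a complete ports tree first:

# portsnap fetch extract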

After updating our ports tree, we need to install PostgreSQL. At the time of writing, the 8.2 series is the latest stable series.

# cd /usr/ports/databases/postgresql82-server/
# make config

Select the flags you prefer. The only flag I changed from the default was the optimization flag. Not that I know if it makes much of a difference, but if you’re compiling anyway, you might as well try to build more optimized binaries.

# make install

Now Postgres is installed. However, in the usual FreeBSD manner, we need to enable it in rc.conf before we can fire it up.
Edit /etc/rc.conf and add postgresql_enable="YES"
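
If you prefer doing it from the shell, you can append the line like this:

# echo 'postgresql_enable="YES"' >> /etc/rc.conf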

Once you’ve enabled Postgres, we first need to initialize the database, and then start the service.

# /usr/local/etc/rc.d/postgresql.sh initdb
# /usr/local/etc/rc.d/postgresql.sh start

Voila, now we have Postgres initialized and up and running.
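
To verify that the server really accepts connections, you can list the available databases as the pgsql user (the account created by the port):

# su - pgsql
# psql -l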

Let’s take a look at the package we’re actually interested in installing: SQL-Ledger.

We’re just going to do a simple ports installation of SQL-Ledger.

# cd /usr/ports/finance/sql-ledger
# make install

After installing the package, we need to make changes to Apache in order to enable SQL-Ledger. Simply add the following line at the appropriate location in your httpd.conf or ssl.conf:

Include /usr/local/etc/sql-ledger-httpd.conf
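
If you built Apache 2.2 from ports, the main configuration file typically lives at /usr/local/etc/apache22/httpd.conf (the exact path depends on your Apache version), so appending the line could look like this:

# echo 'Include /usr/local/etc/sql-ledger-httpd.conf' >> /usr/local/etc/apache22/httpd.conf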

To make sure our Apache configuration is intact, we run:

# apachectl -t

And if that went well, we run:

# apachectl restart

The only step remaining now is some database-related work.
Since I’m not an expert on Postgres, I might not be explaining this in the best way. Anyway, I’m just trying to share my experience, since this was the step where I ran into some speed bumps when first setting up SQL-Ledger. The problem I faced was that the prompts I received differed quite a bit from the ones found in the various manuals I had read before installing.

First, switch to the Postgres user:

# su - pgsql

Once you’re logged in, create a user for SQL-Ledger:

# createuser -d -P sql-ledger
Enter password for new role:
Enter it again:
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) y

Note that if you remove the -P, you won’t be prompted for a password. However, I personally prefer setting a password here.

Lastly, we need to install the PL/pgSQL language into the template database; any dataset created from it later will inherit the language.

# createlang plpgsql template1
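
To double-check that the language was added, you can list the languages installed in template1:

# createlang -l template1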

Hopefully that went without any problems. Now it’s time to surf into SQL-Ledger to make the final configurations. Open your browser and go to http://<your server>/sql-ledger/admin.pl. Log in without any password.

SQL-Ledger Login

Note that I’ve already set up a password when taking this screenshot.

SQL-Ledger - PG database
Select the “Database Administration” link.

Use the user ‘sql-ledger’ and the password you assigned when creating it. For ‘connect to,’ use ‘template1.’ When you’re done filling that out, hit ‘create dataset.’

SQL-Ledger - DB admin

The next screen that pops up is the Create Dataset screen. Here you need to set the name of your dataset, using only lower-case letters. I’d suggest the name ‘sql-ledger’ to keep things from getting complicated later on. For ‘encoding,’ select UTF8. As for the ‘chart of accounts,’ it really depends on what business you’re setting up SQL-Ledger for.

SQL-Ledger - Create Dataset

You’re done! All you need to do now is set up the users. Since this is quite straightforward, I’m not going to cover it.

For more information, please visit sql-ledger.org. You might also want to take a look at this ‘unofficial’ manual (the official one costs $190).

Edit 1: Dieter Simader, the founder of SQL-Ledger, e-mailed me to point out that the only non-GPL’d version of SQL-Ledger is the 2.8-series. The 2.6-series used by FreeBSD ports is still under GPL.


The articles here at PWW tend to be a bit more in-depth than this, but I thought this might be a good tip that many would benefit from. As you’ve probably figured out by now, both Alex and I are Mac users, and we just adore the design of Apple’s products. As a result, we both bought the Wireless Mighty Mouse to use with our laptops. A couple of days ago my Mighty Mouse stopped scrolling up. It was weird, because I could still scroll down and horizontally. Since this was very annoying, fixing it became the first thing on my priority list.

After some googlin’ and reading a couple of Mac forums, I found the solution: press down hard on the ‘scroll ball’. This sounds like a weird thing to do, but after checking some other sites that said the same thing, I tried it. After pressing the ‘scroll ball’ down quite hard, the scroll feature started working again.


OK, so we’re not really there yet, but it really looks like many big players are aiming for this within the next few years. The list of software being moved to the web can be made long. Although Google received a lot of press for Google Docs and Google Spreadsheets, there are at least a handful of other equally interesting products. A company named GravityZoo aims to bring the entire desktop online. Among many other things, they’re working on porting OpenOffice to the web. What makes this more interesting than Google Docs is that instead of being a closed commercial product, the source code must be released.

What this means is that companies might be able to run the solution on their own intranets, so sensitive information never needs to leave the company’s network. Many might argue that data is still safe with Google’s commercial service, even though it’s online. However, since many larger corporations’ IT policies strictly state that internal information is not allowed to leave the local network, running a web-based OpenOffice on the intranet will give them the benefits of the web app without sensitive information ever leaving the corporate network. Moreover, with a simple VPN solution, even road warriors will be able to take advantage of such a setup.

Now, let’s look at the ups and downs of using web apps instead of traditional software. When I think of web apps, the first thing that comes to mind is the administrative aspect. One of the largest benefits of administering web apps rather than traditional apps is that you don’t need to configure each and every one of your desktop machines with the particular software. Although you probably want to install a more secure browser than Internet Explorer if they’re running Windows, that is really all you need to do on the clients. Another quite obvious benefit is platform independence. If your web app is well written, it should work in any browser on any platform, which is a great thing, since you don’t have to spend money on porting your software to a variety of platforms. Moreover, when you have a variety of platforms, file sharing tends to be a hassle. If you’re running a web port of OpenOffice with built-in file management, you don’t need to worry about that anymore.

So what’s the downside? I spent quite some time thinking of drawbacks of using web apps, but could only really come up with one: they might be less responsive. If you’re on a slow connection, let’s say over the Internet, the delay might be very annoying. However, if you’re running the web app on a local 100 Mbit network, the delay of a well-written AJAX web app should be quite small. I think the largest obstacle to overcome is the mindset of the users.

Speaking of web apps, we at WireLoad are planning to make a web port of Firefox. We also talked about porting this blog to the web…


If you’re a regular reader of our blog, you may remember an article a while back about a piece of software called Cacti. It’s a nifty little web-based program that gathers information from a variety of hardware using SNMP. Cacti then presents the data in easily readable graphs.
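
As a side note, you can see the kind of data Cacti collects by querying a host manually with snmpwalk from the net-snmp tools, for instance the CPU load (the host name and community string here are made up):

# snmpwalk -v 2c -c public ws01 HOST-RESOURCES-MIB::hrProcessorLoad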

At the time of that article, I installed Cacti for one of the organizations whose IT infrastructure I administer. Not only did I get a better idea of the utilization of bandwidth and hardware on the servers, but I could also see how much CPU the workstations were consuming. Although I knew that the CPU usage on the servers was quite low, I didn’t anticipate that the CPU usage on the workstations was quite as low as it was.

The organization is quite a typical office environment with 20-some workstations running mainly our own software plus web, e-mail, word-processing and spreadsheet applications. The hardware is quite modern, with Celeron CPUs ranging from 1.8 GHz to 2.7 GHz and RAM between 256 MB and 512 MB. All workstations run Windows 2000 Professional. Before I installed Cacti, I thought that the CPU usage during the day would average maybe 30-40%, with some significant peaks pushing up the average. However, I was quite surprised to find out how wrong my estimate was. It turned out that the average CPU usage on these workstations was less than 10% on all machines, and less than 5% on most of them, with only a few significant spikes. It’s true that Cacti only polls the workstations every five minutes, but over the months I’ve been running it, that should still give a quite accurate picture.

Cacti Monthly

Sample of the monthly view in Cacti

With this data at hand it’s quite obvious that we’ve been over-investing in hardware for the workstations. Even though I would rather overestimate a little bit than underestimate, it seems my estimates were far too high. Even so, when purchasing new workstations we’ve pretty much bought the cheapest Celeron available from the major PC vendors, so maybe it would have been hard to adjust the purchases even with these figures on hand.

After some thinking I came up with three possible ways to deal with this problem:
1. Ignore it
I guess this is what most companies do. Maybe the feeling of being ‘future-proof’ is valued more than the fact that you have a lot of idle time. The benefit of doing this is that you have modern hardware, which is less likely to break than old hardware.

2. Buy used hardware
Most people in power would be scared by the mere thought of this. However, since there is no really low-end new hardware available, this appears to be the only way. The first problem you will face is probably finding uniform hardware. As an administrator, you know how much easier it is to administer 20 identical workstations rather than 20 different ones, both in terms of drivers and hardware maintenance.

The second problem that comes to mind is reliability. Obviously, 5-10 year old hardware is more likely to break than brand new hardware, and if it does, there is no warranty to cover it. On the other hand, if you buy used hardware, your budget will likely allow for a couple of replacement computers.

There are also security implications of buying used computers. Every modern company with an intelligent IT staff is concerned about security, both in software and hardware. If you buy used hardware, there is a chance that it has been compromised (hardware sniffers, etc.). I guess the only way to deal with this problem is to carefully and physically inspect all the hardware you purchase.

If you or your company do choose to buy used hardware, there are plenty of places to get it. One of the more interesting ones I found was a company in Australia called Green PC, which sells a variety of computers and peripherals at reasonable prices.

3. Donate idle-time (to internal or external use)
With the rise of clusters, distributed computing and virtualization, there are plenty of ways to put idle time to good use today. One of the more famous projects in this area is Folding@Home, a project at Stanford University that uses the participants’ idle CPU/GPU time for medical research. More recently, a project at Berkeley called BOINC created a program that lets the user choose between a variety of distributed computing projects within the same application. By participating in such a project, the company can create positive publicity (if the participation is significant).

If your company isn’t interested in donating idle time to charity or research, it might still be able to use that time itself. If all your workstations are connected with a high-speed connection (preferably gigabit), you might be able to use the computers in a virtualization environment. This is doable in theory, but I don’t know how well it would work in reality. Another alternative is to use the idle time for internal computations. If your company is in the software business, distributed compiling (sketched below) might be one way to use the CPUs more efficiently. If that isn’t interesting, there are plenty of distributed computing solutions that could run on the intranet for calculations the company might otherwise pay a third party to perform.
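
To make the distributed compiling idea concrete, here is a minimal sketch using distcc. The host names are made up, and it assumes distcc is installed on every machine. On each workstation, start the distcc daemon and allow connections from the local network:

# distccd --daemon --allow 192.168.1.0/24

Then, on the machine running the builds, list the helpers and compile through distcc:

# export DISTCC_HOSTS='localhost ws01 ws02 ws03'
# make -j8 CC='distcc gcc'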

Hopefully you now have a better idea of how to use your idle time more efficiently, but you should still be careful. Some people argue that today’s computers are not built to run at 100% utilization 24/7. This is a very valid point, since neither the components on the motherboard nor the fans are likely to withstand 100% utilization for very long without breaking. It is therefore recommended to find a distribution algorithm that spreads the computation over the nodes without pushing the individual workstations until they break. I have to admit that I don’t have any data on how the lifetime of the workstations is affected by running this type of software, but I would guess that there is some impact.

To round off this article, I would like to discuss one highly relevant question: “Why are there no cheap, low-end computers available?”

If you go to Dell’s or HP’s homepage and look for their cheapest ‘office’ hardware, it’s still far more than what is required for most office use. So why is that? As I see it, there are several reasons, involving both the software and hardware manufacturers in a mutual effort to stimulate sales. Obviously, the hardware manufacturers want us to replace our computers as often as possible, since this is how they make their profits. The software manufacturers, on the other hand, want to sell new versions of their software by implementing fancy new features that are unlikely to add to productivity, but require hardware upgrades to run properly.

Let’s say you’re on board with my ideas and decide to look for cheaper hardware, but still feel that used hardware is too risky. One possibility might be to go for some kind of ITX solution. These come with a less powerful CPU, and often include everything you need for desktop usage, but cost less than regular computers. One benefit of ITX boxes is that they are very small and light, which makes them cheap to store and ship internally.


A while back I became responsible for the IT infrastructure of another company. Fortunately, this type of company tends to be easy to manage with the right software and hardware. However, this time I thought about trying something new. For many, many years I’ve been running Linux on the server for a variety of companies, but now I’m thinking about taking things one step further: migrating the desktops. I’ve already done some initial research into what applications the company is using, and it appears that the few ‘Windows only’ applications they use can be run through Wine.

With the impressive improvements Ubuntu has made to Linux on the desktop, it might be about time to give it a shot. Ubuntu Desktop comes with all the software needed for the business (word processing, spreadsheets, web browser, e-mail, etc.), which makes it convenient to install and maintain.

At the moment the company is in really bad shape in terms of IT infrastructure. They have no server, and use one of the workstations to ‘share’ the files used by a critical business application. Obviously, this needs to change.

Starting with the server, I’m considering either Ubuntu Server (which goes well with Ubuntu Desktop) or CentOS (a clone of Red Hat Enterprise Linux) for the operating system. As for the hardware, I’m considering some cheap Dell or HP server with two extra SATA hard drives running in a RAID 1 array for the critical files. Furthermore, the server needs a UPS attached to ensure good uptime.
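
If the server lacks a hardware RAID controller, the RAID 1 array can be set up with Linux software RAID instead. A minimal sketch with mdadm, assuming the two extra drives show up as /dev/sdb and /dev/sdc (device names and mount point are just examples):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# mkfs.ext3 /dev/md0
# mount /dev/md0 /srv/files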

So now we have both our server and desktops running Linux, and we get that good feeling in our chests. Now what? The next (and final) thing is to set up file sharing and centralized user administration. The problem with this step is that there are many different paths to choose from. I narrowed it down to three options (a configuration sketch for the first one follows after the list):

Samba PDC login with Samba file sharing
  • Benefit: Works well even in a mixed environment with Windows machines.
  • Drawback: Not a native UNIX filesystem, hence no support for UNIX permissions on files.

Kerberos with NFS file sharing
  • Benefit: Has been around for quite some time. Fully compatible UNIX filesystem. Recommended by FreeBSD’s handbook.
  • Drawback: More complicated to set up. A second (slave) server is strongly recommended.

NIS with NFS file sharing
  • Benefit: Has also been around for quite some time. Fully compatible UNIX filesystem. Easy to set up.
  • Drawback: Old. Not as secure as Kerberos.
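
To give a rough idea of what the first option involves, here is a minimal smb.conf sketch for a Samba PDC with a shared files area. The workgroup name and path are made up for illustration, and a real setup needs more than this (a netlogon share, machine trust accounts, and so on):

[global]
   workgroup = EXAMPLE       # made-up domain name
   security = user
   domain logons = yes       # act as a PDC
   domain master = yes

[homes]
   browseable = no
   writable = yes

[files]
   path = /srv/files         # example path for the shared business files
   writable = yes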

All of these solutions can be run with OpenLDAP as a back-end, which makes administration and integration of other applications easier later on. In a way, the flexibility of Linux/UNIX becomes somewhat of a disadvantage here: since there are almost infinitely many ways to approach the problem, it takes a great amount of research.

In my research, I first posted at Ubuntu’s forum, and later at Gentoo’s forum. However, neither really gave me a good answer. Right now I’m leaning toward Kerberos, since it appears to be more secure and robust.

Since this project is scheduled for the summer, I haven’t started implementing this yet. However, I’m curious about what you readers think. How would you approach this problem?
