June 2010 Archives

Working from home, pros and cons

Just over three months ago I started a new position working from my home office, for a company in another state.

Now, to those of you who say "That's an awesome job" having heard only that, I must correct you: a sucky job would suck DOUBLE if you were home alone doing it. I like my job for some of the same reasons someone might like their office job.

The main difference is that the typical office banter has been eliminated from my life. Sometimes this is good, because it allows me to concentrate. Sometimes this is bad, because I just want a conversation with someone who is not my family, about something that isn't work. For me, this is one of the two 'cons' of working at home: sometimes it's lonely, and sometimes I just want to be away from my family.

The other con is that I can go a week or more at a time without leaving the house. That means reduced exercise and, as I mentioned above, limited human interaction.

The pros of programming out of your house are almost exactly what you think, except not as big. During my breaks, I can let my dogs out, or move laundry from the washer to the dryer. I can spend lunchtime with my wife and kids, or even visit my kids' school at lunchtime for activities they might be having. The latter is also possible if your office is close to your child's school, so it is not unique to working at home; it just happens to be an advantage I picked up.

Following the same routine you used when you commuted to an office is a good idea, and I benefit very much from such a routine.

If you are thinking of programming from your house, I suggest getting 'dressed' for work every day, and having some sort of daily accountability (we have daily team meetings via phone).


Perl!

Yay Perl!

Hope you are all having fun at YAPC

Good procedural programming

Am I allowed to complain a bit here? It's my blog, so I feel I am allowed to complain a bit.

Along with my complaining, however, I will offer a solution that would solve the problem.

The problem I am having is this: I am working on a large ball of spaghetti, and I keep coming across sub names similar to the following: create_user, do_updates_on_primary, etc.

My complaint is this: all of these subroutines take in the same two variables (plus, sometimes, a $dbh and other setup vars) and then do their work. The two variables they always take are $vars and $q. $vars is a large hash, and $q is exactly what you would expect in a CGI environment.

These subroutines then use $q and $vars to determine exactly how they should work, and then they do their jobs, including pulling the data out of $q themselves. Instead, a bit of setup should happen before these functions are called, e.g. my $user = { attrib1 => $q->param('asdf') }; and then: create_user($user).
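To make the contrast concrete, here is a minimal sketch. The field names (name, email) and the database details are hypothetical, not taken from the actual codebase; the point is only the shape of the interfaces.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Spaghetti style: the sub digs through $q itself, so its real
# inputs are invisible at the call site.
sub create_user_spaghetti {
    my ($vars, $q) = @_;
    my $name  = $q->param('name');
    my $email = $q->param('email');
    # ... insert into the database via $vars->{dbh} ...
}

# Cleaner style: the caller unpacks the CGI data up front and the
# sub receives exactly the data it needs, nothing more.
sub create_user {
    my ($user) = @_;
    # ... insert $user->{name} and $user->{email} into the database ...
    return $user;
}

# At the call site, the setup happens first:
my $user = { name => 'alice', email => 'alice@example.com' };
create_user($user);
```

The cleaner version also makes the sub testable on its own: you can hand it a plain hashref without faking up a whole CGI request.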


Ironman status?

Does anyone know if/where there are any postings about Ironman statuses? Part of what I was looking forward to was seeing my status change. Any comments on that?

I have kept up, but I haven't seen the status badges since October 3rd, when they were last updated (and later removed).


Code quality vs. economics, part II

The last time I wrote about this, the problem I had encountered was an SQL query of the form SELECT * FROM table WHERE conditions. Later, the Perl code would strip out more rows, which could have been done with a LIKE in the SQL.

Everybody decided that it would take just as long, and probably longer, for the database server to retrieve the larger dataset, send it across the network, and have it processed on the web server than if it had just done the LIKE itself.

I will not argue this point. It most likely is faster for a single query to do it the 'right' way.
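As a concrete sketch of the two approaches (the table and column names here are hypothetical, and the DBI calls are shown in comments since they need a live handle):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Approach in the current code: fetch everything and filter in Perl
# on the web server. With DBI this would look something like:
#   my $rows = $dbh->selectall_arrayref(
#       'SELECT * FROM clients WHERE active = 1', { Slice => {} });
my $rows = [    # stand-in for the fetched result set
    { name => 'Acme Corp'   },
    { name => 'Acme Ltd'    },
    { name => 'Widgets Inc' },
];
my @matches = grep { $_->{name} =~ /^Acme/ } @$rows;

# The 'right' way: let the database do the filtering, at the cost of
# database CPU time:
#   my $rows = $dbh->selectall_arrayref(
#       q{SELECT * FROM clients WHERE active = 1 AND name LIKE 'Acme%'},
#       { Slice => {} });
```

Both return the same rows in the end; the question is only which machine spends the CPU cycles doing the filtering.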

The problem here is scale. We have 5000+ corporate clients, all banging away on one MySQL database. The thing to consider is that the database server has many components, and some are easy to add or make faster (RAM, RAIDed hard drives), while others are not (CPUs).

The reason it is more cost-effective is that any time you free up the CPU to do other tasks, it's a win. With the transfer of data mostly relegated to the hard drives and network, requiring minimal action from the CPU, the CPU is freed to perform another task for another queued request.

Basically, we are moving CPU cycles from one machine to another, with all the network and drive overhead that entails, because it is not cheap to add another mirrored database server, whereas load-balanced web servers are much cheaper and easier to come by.

I hope that this clears up the confusion. Is it faster to do it this way? Almost certainly not. However, there is an economic benefit. That benefit is difficult to measure, and honestly it is countered by maintenance costs, but that is beyond the scope of this entry.


About this Archive

This page is an archive of entries from June 2010 listed from newest to oldest.
