Made of Everything You're Not

Personal blog of PHP programmer Eric Lamb.

Hey!! You There, Pussy! Don't Be A Pussy.

Posted in Brain Dump, Business, IT, Programming, Rant on September 28th, 2009 by Eric Lamb – 2 Comments

Working in IT requires balls; you have to make some really tough choices with very real consequences. It's not really a problem for programmers; very few of us work on projects that have the potential to destroy lives or break companies apart. On the other hand, in IT, you're dealing with the backbone of an organization. Make a mistake here and: You. Are. In. Trouble.

Don’t Be A Pussy

Not to worry though; try as hard as you want to not fuck up and it's just going to happen that much sooner.

I can say with absolute certainty that there's going to come a time in your career when you fuck up. Big. Like really BIG. The type of mistake that has the potential to sink the company or client you're working for/with. When it happens it's going to be bad. So bad that you'll have the fear of Dad in you. You remember that right? When Dad was coming home and you knew he knew what you did and you knew your life was over. If you didn't have a Dad, think sheer panic mixed with absolute paranoia and terror. Yeah, that's the stuff.

What you did/will do isn't important. What is important is how you deal with it. You're going to have options when it comes to dealing with the issue(s) and how you act is going to determine how your colleagues and peers look at you for the next few months. Make the wrong call and you're in for some really uncomfortable silences and some really awkward sidelong glances.

If this has already happened to you; congratulations. Just know it probably won't be the last. On the other hand, if it hasn't happened yet, get ready; it will. You're going to make some stupid mistakes in your career; mistakes so idiotic and so demoralizing your confidence will shatter and you'll have a hard time getting back on the horse.

Like I said above, I have absolutely no idea what you do or what you can do to fuck it up so, as anecdotal examples only, I'm going to rely on my personal experience. I can honestly say, with absolute pride, that I have done the following:

  • Deleted a database and couldn't restore the data
  • Deleted all the rows in a table and didn't have a backup
  • Deleted a user account and all the email and files associated with it.
  • Changed every users password to "password" in a database
  • Sent an internal cost analysis report for a client project to the client

And that's only what was off the top of my head; I'm sure I've blocked out some of the worse things. The one constant across the list above (aside from the stupidity involved) was that I owned the mistake. You have to immediately handle the situation, whatever that means (it'll depend on the situation).

After that though a funny thing will happen; it's very likely your confidence will be shot. This is important because you need confidence (read: balls) to work in IT. There are too many things you don't know how to do yet that you're going to have to do anyway, and that requires the confidence to know you can do them. It's why we make the big bucks.

In my experience the only thing you can do in these situations is get back on the horse ASAP. The sooner you do something, anything, that has consequences the better. You can't wallow in the past and getting hung up isn't the answer.

BTW: After reviewing the above I have to say:

Thank fucking God I don't work in IT anymore.

When Did Performance Stop Being Important?

Posted in IT, Programming on September 21st, 2009 by Eric Lamb – 10 Comments

Now that I'm finally starting to "get" the Zend Framework I'm starting to have some serious doubts about whether I made the right choice; not in choosing Zend over another framework but in choosing any framework at all. The memory usage is just abysmal across the board and, after working with the Zend Framework for about a month or so, it's not entirely clear if it's going to scale as I need it to.

When Did Performance Stop Mattering?

Which leads to the question of why. At the moment it seems like a question of speed of development versus performance (which is ironic because the Zend Framework is not easy or speedy to develop with).

<disclaimer>
To be fair, it's not just frameworks that have an uncomfortable overhead. Just take a look at Joomla and Drupal; 2 popular content management systems with absurd overhead. It's just easier to focus on my current interest rather than the CMSes.
</disclaimer>

One thing I'm having a hard time getting comfortable with is how much memory is required when using a php web framework. Out of the box both Zend and Symfony (for example) use around 5MB per request. Understand, this is without any custom code. Just setting up the MVC and Autoloader for the default views and models. Nothing impressive or useful and 5 fucking MB to run that?
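If you want to check that number on your own setup, a quick and dirty gauge is to log peak memory at the very end of the request; this is just a sketch to drop at the bottom of index.php (or wherever your front controller ends), not anything official from Zend:

// crude per-request memory check; numbers will vary with PHP version and what the bootstrap loads
$peak = memory_get_peak_usage(true); // bytes, including memory the engine has grabbed but isn't using
error_log(sprintf('Peak memory: %.2f MB', $peak / 1048576));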

After having been on the wrong end of this issue with my own code I'm pretty sensitive to how my code performs; I've written some nasty algorithms and watching them crumble in real time has a tendency to turn you around.

Researching the issue doesn't really help. There's a lot of advice on how to improve the performance but it seems to always center around common sense improvements you should be using anyway.

The most touted improvement I've heard is that you have to use a PHP accelerator and opcode cache. I find that response flawed, not because it's bad advice, but because it's common sense. Yes, it's true, but not using a framework in combination with a PHP accelerator and opcode cache is still better in my experience. All relying on those tools does is move the baseline for performance, which you're supposed to do already, and a framework still consumes a good amount of resources on its own.

In my experience you get about a 50% reduction in memory usage when using something like XCache, but using the Zend Framework still leaves a total of 2.5MB of memory usage to accomplish the bare minimum setup.

One saving grace is that hardware is cheap. Scaling with hardware is usually the go-to escape when the bottleneck is the code but it's not without its own set of issues. For one thing, while it's true that hardware is cheap, the labor to maintain that hardware is not. Especially if you want to maintain the server in a proper and responsible manner.

Another option, one that's really only available when using the Zend Framework, is decoupling the project from a direct dependence on it and not using the MVC components. In anticipation of doing this I've been writing a lot of my recent code and projects in a style that'll allow easy(ish) separation when the time comes.
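To give a rough idea of what I mean, and it is only a rough sketch, ZF1 will happily hand you a single component without the MVC stack; something along these lines, where the include path and the mail example are just placeholders:

// use one Zend component without Zend_Controller or the full MVC bootstrap
set_include_path('/path/to/ZendFramework/library' . PATH_SEPARATOR . get_include_path());
require_once 'Zend/Loader/Autoloader.php';
Zend_Loader_Autoloader::getInstance(); // registers the ZF autoloader, nothing else

$mail = new Zend_Mail();
$mail->setFrom('me@example.com');
$mail->addTo('you@example.com');
$mail->setSubject('No MVC required');
$mail->setBodyText('Just the one component.');
$mail->send();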

At this point I haven't used a framework in a production environment so all of this consternation might be for nothing. I just have a hard time accepting a performance hit of half a MB for something trivial, like a content management system (Drupal) or, for example, a component like Zend_Navigation, compared to the benefit. What are they actually doing to make the cost worthwhile?

Still another option is to just walk away from this whole OOP thing and head back to the familiar touch of procedural php, using functions and classes more as decorators to apply than as core components. From my personal experience, and only my experience, using OOP is way more expensive than procedural. At the end of the day I need my programs to work fast, be easy to operate for my users and have a low impact on the server. How does using OOP help with that?

At the moment I'm not sure how this is going to work out. I am confident it'll be an adventure though. Hopefully, I find out how Zend will scale before a project of mine goes viral or gets popular. Hopefully.

Living in Two Worlds

Posted in IT, Programming on August 12th, 2009 by Eric Lamb – 0 Comments

I generally consider my professional persona to be a software guy first and a hardware guy second. My first passion is code, through and through, but I have also spent a good deal of my time performing the day to day office IT stuff and, usually, I have a good time doing it. More than that though; I've always found that working on the hardware is a good way to know how my software is going to interact with the hardware. Read: It makes me a better programmer.

Blow My Mind

Needless to say, I have some ideas about hardware setup and deployment; a philosophy if you will. I try to be pretty humble about it but I couldn't help but be reminded of this when, a few weeks ago, I was listening to Stack Overflow podcast #59.

This one was cool; they had Damien Katz on who, if you don't know, is the creator of CouchDB and used to work on Lotus Notes (back when the Internet didn't matter). Smart guy.

(BTW, if you don't know who he is I highly recommend you read his blog. Start with this post called Signs You're a Crappy Programmer.)

Anyway, like I said; good podcast. Up until the end, that is, when Joel and Jeff completely blew me away with the following dialog while discussing a question on ServerFault about disabling your page file (around 1:01:44 in the podcast):

Joel: There's a problem that we've always had, and it's more common, I hate to say this, it's more common among Unix system administrators than Windows system administrators, which is, they get the thing out of the box, they get the operating system out of the box, they install it, and then they're going to want to do 47 things to that system before they can use it. Mostly removing things that were put there that they don't understand.

So they have this attitude that's like, "What are all these services that are running; I'm going to kill all of these services and then my server will be really fast."

And then, all of a sudden, ok, it works for a while and then you go and install FogBugz, and it doesn't run because some basic service, that everybody else has, has been removed, severely deleted from the operating system, by some system administrator that thinks they know better but, really doesn't.

Jeff: You sound really bitter about this.

Joel: I am bitter because it's all over tech support calls. It comes from people who are like... There is generally a philosophy that security flaws come from things, often come from things, that you don't even realize you have running. And that probably shouldn't be running.

I had to rewind the podcast when I heard that part. Was Joel really suggesting that we leave the default services enabled on an operating system? Did I just hear Joel Spolsky imply it was bad to disable and remove unneeded services from a computer?

Yup, I think I did. I also don't think it's the best idea to keep the default configuration on a server. Why? Because an OS is released with the goal of a good out of box experience, not security. For example, does your Linux web server really need CUPS running? Does your Windows server really need Windows Media Player to start every time you boot the thing?

Now I'm totally willing to accept that I'm being naive; this is knowledge gained from experience not instruction. But it'd have to be a compelling argument.

But, to be clear, you disable services and programs, not to improve performance, but to improve security and reliability. (Performance improvement should be a side effect in my opinion.) The thing I think Joel might be missing is that he's more than likely dealing with some pretty busy system administrators. They probably did something to keep FogBugz from working, and forgot what it was, so they called support.

What's In Your Toolbox?

Posted in IT, Programming, Servers on August 05th, 2009 by Eric Lamb – 1 Comment

One thing almost all computer users have in common, regardless of vocation, is that we use the computer to achieve some goal. The actual goal doesn't matter so much as the fact that we're using the computer to do something that, otherwise, we wouldn't be able to do. To do so though we use various tools that are, usually, purpose built for the task.

Let's be honest; without the tools we would be useless.

What's in your Toolbox?

Personally I love my tools; specifically, I find developer tools to be some of the most interesting and fun toys available. I don't want this to turn into a fanboy post but, in the interest of honesty, it just might. You have been warned...

What's in my Web Developer Toolbox?

My toolbox is full of programs that are purpose built to help with every step of building Internet applications. Over the years, like pretty much all developers I'd guess, I've come to rely on the below tools to ease the pain of development as much as possible. I totally vouch for these tools.

Version Control

This one's crucial. If I had to rank these (and I really don't plan to) version control would be at the top of the list. There's a whole slew of options available but, for me, version control starts and ends with Subversion.

Yes, there's all sorts of hype surrounding Git and Mercurial but, because I work alone right now, my needs are way too simple for anything like distributed version control. Nope; just give me Tortoise and an SVN URL, with credentials, and I'm a happy camper.

Local Development Web Server

Once upon a time the thought of using a local development web server was heresy to my style and philosophy. Now that I've been using one for the last year I have to admit I was dead wrong. Dumb even.

Previously, I would always use an external Linux server for all my development. The idea was that since the finished site would be hosted on a Linux server it was important to develop the site in the same environment. There are 2 big problems with this approach though: one, the project is more likely to become dependent on the environment, which can make relocation a problem; and two, continued progress on the project requires a connection to the Internet.

On the other hand, developing your projects on a local machine requires finesse and forethought to ensure porting the site from one environment to another doesn't lead to anarchy. There's also the knowledge and insight gained from setting up an environment by hand; there's so much to gain from doing this that it's just silly not to, barring some edge case.

Text Editor

Only masochists use Notepad for text editing. In today's world of fast CPUs and large amounts of RAM it's really hard to believe anyone would use Notepad for anything other than the most basic of basic editing tasks. If you plan on having a file open, for editing, for any extended period of time it's just stupid (yes; STUPID) to use it.

Instead, I prefer EditPlus for all my text editing needs. Why? For one killer feature: a right click context menu item. Right click on any file and choose EditPlus to open it for editing; it makes working very fluid and continuous. It even handles files in the hundreds of MBs with ease.

There are other options for a text editor (Notepad2 comes to mind) but EditPlus is tough to beat.

Database Tools

Sure, a command line tool is perfectly adequate for administering a database server. The problem though is that I develop on a Windows machine, and not using a GUI tool for database administration is kind of silly. It's like the people who only use Vim for text editing; it's like trying to prove a point against all logic.

For MySQL, the easiest and most familiar tool is phpMyAdmin; but for remote administration phpMyAdmin starts to break down. Instead, you can't beat MySQL Administrator. It offers all the functionality of phpMyAdmin as well as a slew of advanced features like the ability to create stored procedures (which you should never, ever, do) and functions.

Remote Connectivity

Since pretty much everything I work on has to go somewhere and I usually need to connect directly to a server for administration I need tools that'll allow me access. These tools really come down to 3; FTP, SSH and RDP.

For simply moving files between servers FTP or SFTP is obviously the choice. There are bunches and bunches of options when it comes to FTP clients but I've been using CuteFTP for years and, pretty much, I swear by it. Yes, it's a paid product, but CuteFTP is also low impact, easy to use and, more importantly, doesn't get in the way of my productivity.

And then there's system administration which requires full control and access to a server or computer.

Linux has SSH which requires a small client utility. There's a shitload of options available here but they're all pretty similar so there's not much difference between using, say, PuTTY or SSH Secure Shell Client (or any of the myriad other SSH clients out there). You just need to have one.

For Windows, as far as I'm concerned, there are really only 2 options: Remote Desktop Connection (RDP) and VNC. For ease of use and quality of experience RDP is the way to go. HINT: There's a setting to map the client machine's drives to the server for easier file transfers.

Virtual Machines

In my opinion there's been no bigger advance in quality assurance (QA) than the advent of the virtual machine (VM). QA probably wasn't even a goal when VMs were first conceived but, boy, have they filled the gap well.

I've already gone into detail about my choice of VM tool as being Virtual Box:

VirtualBox handles the resource detail pretty elegantly; in that it doesn’t use the resource until it needs it. This means that instead of instantly having 10GB of your hard drive used up VirtualBox will only use the amount already taken. You can tweak the settings for a VM whenever you want so you can get just the right mix.

Web Browsers

You need pretty much every modern browser under the sun here. Um... duh?

So, there you go; those are the primary tools I use for web development. There are definitely some other tools I use that didn't make the list (RegexBuddy, Photoshop and diff tools come to mind) but I didn't feel they deserved mention because of how rarely they're used.

Did I miss anything else?

What Does Zend Server CE Have to Offer?

Posted in IT, Rant, Servers on July 29th, 2009 by Eric Lamb – 0 Comments

Since I had to setup a whole new computer I decided to move away from the IIS experiment I've been working on for the last year and try something a little different. I'd heard about Zend Server CE before but after a failed attempt to get it working a few months ago, because of IIS ironically enough, I hadn't really given it the attention I thought it deserved. After having played with it for about a week I have to say I'm completely... underwhelmed.

Failed to Login

Zend Server is supposed to be a complete Web Application Server that is purpose built for php development. It includes application monitoring, problem root cause analysis and extended caching capabilities. Pretty enticing really.

Unfortunately, Zend Server CE doesn't include any of the above bells and whistles. Instead, it's a stripped down version that appears to just match the features and functionality of XAMPP or WAMP (Apache, php and mysql wrapped in a nice little installer for Windows).

I've used both XAMPP and WAMP and, with little exception, I've always wished I'd gone with a manual installation instead. It's not that they're bad programs, and it's nice that they're available for newbies, but my needs aren't easy to package up in a "one size fits all" bundle. I like to try new things and experiment and sometimes what I want to do isn't easy without breaking something. Admittedly, I haven't tried to use any one size fits all package for a few years so this may not be the case anymore.

Either way though, I know I have a bias; I might even be a bit of a snob about the issue. Totally possible.

That being said, after installing Zend Server CE, which went very smoothly actually, I was confronted with what appeared to be an incomplete installation of php; php-win just didn't work. It did nothing in fact; I couldn't get it to do a damn thing. Since I do a little maintenance scripting with php-cli (and php-win.exe is essential on Windows) this was a pretty big issue.

On top of that, I just couldn't figure out how to modify the --configure options so changing the setup was obviously going to be an issue. I don't know if I'm an idiot but I just couldn't figure it out.

Then the let down happened; I was under the impression that there were going to be some cool profiling toys to play with. Instead, there's a web GUI for configuring PHP, which is pretty nice I guess, but for me, it's just easier to edit php.ini directly than navigate through a web interface. Kind of useless. What with the integration with Zend Debugger I was really expecting more.

Ultimately, it seems that if you're a complete newbie to php Zend Server CE is a worthwhile fit but if you actually know what you're doing you're still better off setting up a development environment manually.

This is pretty disappointing. A product from Zend that's supposed to ease the pain of php development, released to the community, offering nothing more than you could already get from a dozen other programs, kind of seems like posturing. I understand the desire to have a demo of a paid product but it should, you know, be different.

A change I'd like to see would be to include some of the more advanced features, like the application problem diagnostics and the application monitoring (alerting) functionality, in the CE version. That would benefit the community far more than the current version does.

mtop/mkill - MySQL Monitoring Tools

Posted in IT, Programming, Servers on July 08th, 2009 by Eric Lamb – 0 Comments

The Linux command "top" is one of the most used and powerful jewels in a developer's pocket.

In most Unix-like operating systems, the top command is a system monitor tool which produces a frequently-updated list of processes. By default, the processes are ordered by percentage of CPU usage, with only the "top" CPU consumers shown. The top command shows how much processing power and memory are being used, as well as other information about the running processes. Some versions of top allow extensive customization of the display, such as choice of columns or sorting method.

The top command is useful for system administrators, as it shows which users and processes are consuming the most system resources at any given time.

MySQL Top

The great thing about "top" also highlights one of its weaknesses; it's focused on CPU, memory (RAM) and time. Top is wonderful if you want to know how much of the machine your program is using but if you want to know how much the individual components are using you're out of luck.

Enter mtop.

mtop (MySQL top) monitors a MySQL server showing the queries which are taking the most amount of time to complete. Features include 'zooming' in on a process to show the complete query, 'explaining' the query optimizer information for a query and 'killing' queries. In addition, server performance statistics, configuration information, and tuning tips are provided.

mtop is a pretty useful program; it really helps in finding the trouble spots in queries. There's one obstacle to consider before diving in though; mtop is written in Perl so there are a couple of module dependencies (Curses, DBI, DBD::mysql, Getopt::Long and Net::Domain).
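If your system is missing any of those, pulling them in from CPAN is usually a one-liner (your distro probably packages most of them too):

cpan Curses DBI DBD::mysql Getopt::Long Net::Domain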

Still, I didn't run into any issues installing the program and, so far anyway, mtop is a nice addition to my tool box.

The Damned 2147483647 Issue. Again.

Posted in IT, Programming on July 06th, 2009 by Eric Lamb – 4 Comments

Here's another random MySQL issue I run up against from time to time; an obscure duplicate key error.

#1062 - Duplicate entry '2147483647' for key 1

The reason for the issue is that the script is trying to insert a number larger than 2147483647 into an INT column, and 2147483647 is the highest value a signed 32-bit integer (which is what MySQL's INT is) can hold.

2147483647

The number 2,147,483,647 is also the maximum value for a 32-bit signed integer in computing. It is therefore the maximum value for variables declared as int in many programming languages running on popular CPUs, and the maximum possible score for many video games. The appearance of the number often reflects an error, overflow condition, or missing value.

The data type time_t, used on operating systems such as Unix, is a 32-bit signed integer counting the number of seconds since the start of the Unix epoch (midnight UTC of January 1, 1970). The latest time that can be represented this way is 03:14:07 UTC on Tuesday, 19 January 2038 (corresponding to 2,147,483,647 seconds since the start of the epoch), so that systems using a 32-bit time_t type are susceptible to the Year 2038 problem.

Online microblogging service Twitter faced a similar problem (called a "Twitpocalypse") at 2009-06-12 23:52:04 GMT, when the unique identifier associated with each tweet exceeded 2,147,483,647. While Twitter itself was not affected, some third-party clients were, and had to be patched.

Ironically, the issue that precipitated this post was for a Twitter app. A little advice; if you're storing Twitter post IDs don't use INT for the column type.
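If you've already got data in an INT column, the fix is a one-line ALTER; the table and column names here are made up for the example:

ALTER TABLE tweets MODIFY tweet_id BIGINT UNSIGNED NOT NULL;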

It's a pretty common error actually; definitely not one to be too embarrassed about (you should be a little embarrassed though). In fact, if you've ever played a video game you've probably already seen it.

A number which is commonly found in hacked games, where the score will be 2147483647. It is also the highest score you can get in a 32 bit game. The maximum score is 2147483647 because most games are written in 32 bit and it has to represent both negative and positive integers, so 2^31 - 1 would be 2147483647. It is also a Mersenne prime, which is a prime number that is one less than a power of two.
1. h4x0r - 2147483647
2. h4x0r - 2147483647
3. h4x0r - 2147483647
...

noob: OMG... someone has hacked the game and got 2147483647
h4x0r: beat that score noob

Perhaps the coolest, or scariest if you're a pussy, issue with 2147483647 is, as shown above, what's being called the Year 2038 problem.

The Year 2000 problem is understood by most people these days because of the large amount of media attention it received.

Most programs written in the C programming language are relatively immune to the Y2K problem, but suffer instead from the Year 2038 problem. This problem arises because most C programs use a library of routines called the standard time library. This library establishes a standard 4-byte format for the storage of time values, and also provides a number of functions for converting, displaying and calculating time values.

The standard 4-byte format assumes that the beginning of time is January 1, 1970, at 12:00:00 a.m. This value is 0. Any time/date value is expressed as the number of seconds following that zero value. So the value 919642718 is 919,642,718 seconds past 12:00:00 a.m. on January 1, 1970, which is Sunday, February 21, 1999, at 16:18:38 Pacific time (U.S.). This is a convenient format because if you subtract any two values, what you get is a number of seconds that is the time difference between them. Then you can use other functions in the library to determine how many minutes/hours/days/months/years have passed between the two times.

All this from one number.

You Can't Embed Videos in Email People

Posted in IT on June 30th, 2009 by Eric Lamb – 3 Comments

Being the Director of Technology for a marketing agency is full of surprises. The best surprise was learning that everyone is an expert. Yup; regardless of experience, or knowledge, you're going to run into all sorts of people who know everything about subjects that are incredibly complex and deep.

Don’t Embed Videos in Email

This was made apparent the other day when I received a link to an article called Hot Ecommerce Trend: Embedded Video in Email.

The article starts out with the general, and true, premise that linking to videos is a good way to improve click through rates:

Anna Yeaman reports one retailer boasting a 20-27% click through rate without linking to video, and 51-65% with links to video. And Forrester Research reports video in email can increase click through by 2-3X.

Of course there's no mention of what the campaigns were about, or who the sample targets were in relation to previous email campaigns. But what the hell, it's the Internet, so we grain-of-salt the numbers and don't give them any real weight. Interestingly, though the article never comes right out and says it, from its tone it looks like the idea is that if linking to a video is good, embedding the video directly is better.

The article goes into some detail about the obvious challenges of actually embedding a video in an email instead of just linking to the video (spam flagging and file size mostly), but just doesn't offer a satisfactory response on how to accomplish it. Basically, it says embedding a video is a good thing to do but doesn't provide any solutions for doing it.

This is probably because, oh, I don't know, IT'S NOT A GOOD THING TO DO (hmm, bold, italics, all caps? serious text). Just don't do it if you want your email to, you know, be seen by people.

According to the article:

What’s so tough about embedding actual video in email?

Deliverability is the issue. Large video attachments are often a red flag for spam filters, and ISPs (Internet service providers) block “complex data” including Javascript for security reasons.

ISPs have banned Javascript Sending video in an e-mail has been a challenge for deliverability, since large video attachments often alert spam filters. The way that Goodmail gets around this issue is that their e-mail class, called CertifiedEmail, is a paid service that does not go through typical e-mail filters.

Forgiving for a second the nonsensical flow of the second paragraph, it seems there's a service called Goodmail that, somehow, allows email to bypass the spam filters. Unfortunately, Goodmail is currently only in use by AOL, so it's not realistic unless you have a list of only AOL users. Does anyone, besides AOL, send to just AOL? Exactly.

Next the article talks about sending to just Gmail; provided the recipient has turned on YouTube video embedding in their Gmail account settings they shouldn't have a problem. This, too, suffers from the Goodmail problem of only working with one email provider. Not much of a solution really.

But, all of the above are minor issues that don't really deal with the larger problem; email wasn't made for embedded video. Sure, it can do it, in a kinda-sorta, if you squint and tilt your head kind of way, but it's similar to how HTML isn't a design language but has been re-purposed for that use.

Email was built for text; attachments weren't even a possibility until 1996 with the introduction of RFC 2045. (The original specification only allowed for text communication using 7bit US-ASCII as the encoding and limited lines to around 1,000 characters.) Email with embedded video wasn't even a thought at the time.

How much was video not considered an option? Just take a look at the complete lack of support for certain HTML tags required to embed a video in the popular email clients.

To be honest, there weren’t a lot of surprises here. The OBJECT and EMBED tags remain as poorly supported now as they were 3 years ago. This instantly wipes out Flash, Quicktime, and Windows Media formats. As predicted, Java support was also a no show.

So, what are your options then? That really depends on whether quality matters.

If quality actually means something to you then you definitely should avoid the much touted animated gif "trick". Years ago this might have been a good idea but now, in the 21st century, it just looks dated. Once, video and animated gifs were almost comparable, but now the quality of online video shines way too bright next to an animated gif. It may be easy to do but it definitely looks like amateur hour.

On the other hand, you could always link to the video. Why, for the love of god, this has to be stated plainly escapes me but c'mon people; just use an image and link it to the video on your site. This has the added benefit of driving a visitor to your property at the same time, and you never know, maybe they'll think about sticking around.

To be honest, I don't really see video in email EVER being a viable option. There are just too many problems inherent in the idea to make it possible, much less practical.

First, you have bandwidth, which is still one of the biggest barriers to doing anything cool online. Videos are big and delivering them via email is a sure way to drive people crazy. Have you ever tried to download an email with a big attachment? Same thing.

Second, as mentioned above, the email clients don't even support the HTML tags required to render a video even if it's downloaded. This one is pretty big because those tags aren't allowed for a reason: malware. If an EMBED or OBJECT tag is allowed the bad people have a bigger sandbox to play in. The rule of not opening attachments to help protect against malware goes right out the window. Changing the email client programs is just not going to happen. Sorry.

So, let's all get together and recognize that this isn't going to happen. Ever.

Real Reason I Avoid Pirated Software

Posted in IT on June 17th, 2009 by Eric Lamb – 9 Comments

I wish I could say I have moral reasons for not using pirated/cracked software but I'm pretty practical about the issue. Cracked software isn't something new to me; I've used it a lot. No, like I said, I'm more practical about it.

Cracked Software

BTW, because I've had too much to drink while I write this, I'm using pirated and cracked interchangeably though I know they both mean different things. Try to keep up...

I was once a big fan of not paying for software. While I was a kid and all through college I never paid for any computer programs; not one, not ever. I used to have collections of stuff that my friends and I would share with each other and our families (computers were HUGE in my family). Basic disk swapping really.

And then the Internet came into my world. I became very good at finding cracked programs (warez). First warez sites, then Morpheus, then Limelight, then Kazaa (sometimes all 4).

That said, a lot of people find it surprising that I hate to use cracked software now.

Anyway, I've found that pirated programs are a little unreliable. This could of course be attributed to the main software product, not just the cracked version, but I find that hard to believe (unless, of course, it's version 1 of a new Microsoft product). I've used both cracked and commercial versions of many programs and more often than not the cracked or pirated version is just more unstable.

Now, I admit, I've installed cracked programs that didn't have any viruses and were as stable as you could want. But I've also run into the other side where it just becomes a pretty big time investment.

There's also the whole virus thing which is definitely a crap shoot; sometimes you're good, other times you get hit. But it's still an issue that, if you do get infected, now requires time to clean up. Kind of a bummer when it happens.

Another, smaller reason is the whole update thing. A lot of companies (I'm looking at you Adobe) have started detecting which copies are cracked and, when updates are applied, they disable the program. This means if you want to use a cracked program you had better not update it. Ever. No new security patches and no bug fixes. Sorry.

All the above is pretty lightweight though. I hate to say it, but if that's all there was to worry about I'd probably be all about it.

No, the biggest reason is because you just never know. You never know if the issue you're having with the program is because your computer sucks or because you're using a cracked version that's become unstable. How can you know?

If you rely on software for professional reasons (not a kid or student) it's just not worth the headache.

Setting Up A Linux Web Server

Posted in IT, Servers on May 25th, 2009 by Eric Lamb – 0 Comments

I've been setting up my own servers for years; it's something I'm pretty passionate about as a programmer. I've learned soooo much about programming from seeing how the computer operates; setting up a server is HUGELY enlightening in that respect.


On the other hand though, I have a lot of things to do with my day and setting up a server, something that's pretty rote, doesn't require my direct attention. Just someone who can really follow instructions.

With that in mind, I started to compile a list so I could delegate this activity to my team. It's pretty useful from a historical standpoint too, so in case anyone else is interested in just how to set up a linux web server; here you go.

I've broken the setup into multiple sections, each outlining a specific type of setup, so it should be easier to digest. I'm just over the whole "series" thing, so instead of breaking this post into several smaller posts I opted for one REALLY long post. Plus, there's the whole disconnect thing with a series, and setting a Linux box up shouldn't be done piecemeal.

It should be noted that the below is just a vanilla setup; it should be considered the bare minimum that needs to be done.

Basics

These are just a couple of things that have to be done, in no particular order. Ideally, the Firewall (below) would be configured first and then everything else would follow, but the rest could be done in any order. To make life easier it would be a good idea to take care of the basic stuff ASAP.

Setup locate

There's a sweet little command that is extremely helpful in locating files on the server; locate.

From the manual page:

locate reads one or more databases prepared by updatedb(8) and writes
file names matching at least one of the PATTERNs to standard output,
one per line.

It's way faster and more efficient than 'find' and it's just what you need to find all sorts of things quickly. A basic use works like so:

locate php.ini

Which outputs:

/scripts/php.ini
/usr/lib/php.ini
/usr/local/lib/php.ini

To use it though, the first thing you need to do is make a call to 'updatedb' which will create the 'locate' database. The first time it's run it'll take around 5 minutes to complete, so you want to get this done ASAP so it doesn't break up your flow when working on something else.

updatedb

Setup Host Name

This part will register the server with the rest of the Internet. It's done a different way on quite a few of the Linux servers I've worked with so I won't go into detail on how to configure the server; I'm just going to go over the steps. You can always ask your host to set it up if needed.

Probably the most important part is choosing the name. There seem to be a couple of different camps on this subject. Some like naming it after the role (so if it's a web server it'd be web1 or web5, whatever, and if it's a db it'd be db1 or db5). Others feel this is a security risk because it broadcasts what the server does.

Still others choose names that are just arbitrary: wizardtower, methlab, etc...

Personally, I find the convenience of recognition (knowing web1 is a web server and db3 is a database server) far outweighs any security concerns so I go this route.

Whatever you choose, make sure to register the name with your registrar as a DNS A record. Once that's done you can update the server with the new hostname.
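For the record, the actual commands are short once the name is picked; where the name is stored permanently (/etc/hostname, /etc/sysconfig/network, etc.) varies by distro, so treat this as the general shape only:

hostname web1.example.com     # set the running hostname (the name is just an example)
hostname -f                   # confirm the fully qualified name comes back correctly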

Email Forwarding

For every user on the system that receives email, like root for example, you'll want to create a .forward file in that user's home directory. This is good so email sent to those accounts reaches you personally without having to log into the server.

echo you@domain.com > /root/.forward

Security

Just to reiterate the above, this is by no means a complete list of everything that can be done to secure a server. Consider this a list of the minimum that needs to be done; nothing more. You should still research security based on your particular situation.

Seriously, your needs may require additional levels of security. You've been warned.


Firewall

Most modern installations of Linux come with a firewall installed called iptables which is really reliable and stable. I use iptables in conjunction with either ConfigServer's csf/lfd or, usually if I'm not on a cPanel or Webmin server, apf.

Personally, I prefer csf/lfd for managing my firewall. Not only does it take care of iptables configuration but it also:

  • Sends an email on ssh login or su usage
  • Blocks hosts making excessive connections
  • Login failure notifications for a lot of common services (cpanel, ftp, ssh, etc)
  • Port scan tracking and blocking
  • Temporary and permanent IP blocking
  • System Integrity checking and alerts
  • Suspicious process alerts
  • Suspicious file alerts

apf requires adding a few other programs to attain the same amount of coverage; I prefer simple.

ConfigServer made installing csf/lfd extremely easy; they even put together step by step instructions that have yet to fail me.
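For reference, the install boils down to grabbing their tarball and running the installer; the URL below is from memory, so check ConfigServer's site for the current one:

wget http://www.configserver.com/free/csf.tgz
tar -xzf csf.tgz
cd csf
sh install.sh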

Setup Users

Add a non-privileged user and add them to the wheel group. This is important because we're going to seriously limit access to the shell. I like to have just one user who can ssh into a server but who can't do anything except use 'su' to up their privileges. No access to anything but 'su', not even 'wget'.
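On a Red Hat style box that works out to something like the below; the username is just an example, and restricting 'su' to the wheel group is handled through PAM:

useradd deploy                    # example username
passwd deploy
usermod -aG wheel deploy          # only wheel members will be allowed to su
# then in /etc/pam.d/su, uncomment:
# auth required pam_wheel.so use_uid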

Mount /tmp securely

It's important to mount your tmp directory securely so nothing contained inside can be executed. It really helps when your site allows file uploads or if the server has been exploited (the tmp directory is a prime target).

You want to mount the partition as noexec,nosuid.
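Assuming /tmp is its own partition (or you give it one with tmpfs), that's a remount plus an /etc/fstab entry so it survives a reboot; something like:

mount -o remount,noexec,nosuid /tmp
# and in /etc/fstab:
# tmpfs   /tmp   tmpfs   defaults,noexec,nosuid   0 0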

Install RKHunter

RKHunter is a tool that scans your system looking for any rootkits. It's a good tool but it does report some false positives; nothing too annoying but it does happen.

It's pretty simple to install and I've yet to see it fail on any flavor of Linux I've used (and I've used a bunch).

wget http://superb-west.dl.sourceforge.net/sourceforge/rkhunter/rkhunter-1.3.4.tar.gz
tar -zxvf rkhunter-1.3.4.tar.gz
cd rkhunter-1.3.4
./installer.sh --layout /usr/local --install

Once RKHunter is installed you'll want to set up a daily cron job so your system is checked regularly. To do that just create a shell script and place it in /etc/cron.daily/ as outlined in this tutorial for installing RKHunter.

#!/bin/bash
(rkhunter -c --cronjob 2>&1 | mail -s "Daily Rkhunter Scan Report" root)

SSH

SSH is incredibly vulnerable for no reason bigger than visibility; it's the de facto entrance point for most Linux servers. I like to do a couple of things to secure my SSH installation.

To make any of the below changes you'll need to edit your sshd_config file. On most systems it's going to be in:

/etc/ssh/sshd_config

First, I always disable root login. Most brute force (BF) attacks on your server will be against the user 'root', so simply disabling this makes most BF attacks futile for the attacker. If you've added a new user to the system, and that user is the only user who can ssh in, you're in a pretty good spot. The attacker has to know the username in order to even try passwords.

Second, I also change the port ssh listens on. The default, 22, is what most attackers will try for getting into ssh. Change that to something different and you've added another level of complexity onto the system.
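In sshd_config those two changes, plus limiting logins to the one non-privileged user from earlier, look roughly like this; the port and username are examples, and sshd needs a restart before they take effect:

PermitRootLogin no
Port 2222                 # anything non-default; just remember what you picked
AllowUsers deploy         # the example user created earlier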

It's important to let your host know of the change so they can access ssh when they need to. This shouldn't be a problem for most hosts but you may have a fight on your hands for some.

There are quite a few other options you can configure ssh with; it's definitely recommended that you research as much about ssh as possible and configure it specifically for your needs.

Disable General Commands

This next one isn't exactly critical but I find it useful and it definitely adds peace of mind so there's that.

I first heard about this from a forum for securing a cPanel server.

Many php exploit scripts use common *nix tools to download rootkits or backdoors. By simply chmod'ing the files so that no non-wheel or root user can use them we can eliminate many possible problems. The downside to doing this is that shell users will be inconvenienced by not being able to use the commands below. Mod_security really removes the need to chmod this, but it is an added layer of protection.

chmod 750 /usr/bin/rcp
chmod 750 /usr/bin/wget
chmod 750 /usr/bin/lynx
chmod 750 /usr/bin/links
chmod 750 /usr/bin/scp

As mentioned above, this is probably overkill but it does prevent anyone who gains access from being able to do much of anything. If you really want to have that warm, fuzzy feeling of safety you could also just chmod everything under /usr/bin like so:

chmod 750 /usr/bin/*

That should really make you feel safe.

Disable Unneeded Services

Chances are your server is going to be running quite a few services you're just not going to need. For example there's 'cups', the Linux print service. Is your web server going to be connected to a printer? Probably not.

Leaving these enabled is bad because each one is an avoidable entry point into your server for the "bad" people. From experience I've learned to disable a bunch, so I put together a little shell script to just handle it for me. Copy the below, put it into a file on your server called 'disable_services.sh' and chmod it to 0755.

#!/bin/bash
# stop each unneeded service now and keep it from starting at boot
service cups stop
chkconfig cups off
 
service xfs stop
chkconfig xfs off
 
service atd stop
chkconfig atd off
 
service nfslock stop
chkconfig nfslock off
 
service canna stop
chkconfig canna off
 
service FreeWnn stop
chkconfig FreeWnn off
 
service cups-config-daemon stop
chkconfig cups-config-daemon off
 
service iiim stop
chkconfig iiim off
 
service mDNSResponder stop
chkconfig mDNSResponder off
 
service nifd stop
chkconfig nifd off
 
service rpcidmapd stop
chkconfig rpcidmapd off
 
service bluetooth stop
chkconfig bluetooth off
 
service anacron stop
chkconfig anacron off
 
service gpm stop
chkconfig gpm off
 
service saslauthd stop
chkconfig saslauthd off
 
service avahi-daemon stop
chkconfig avahi-daemon off
 
service avahi-dnsconfd stop
chkconfig avahi-dnsconfd off
 
service hidd stop
chkconfig hidd off
 
service pcscd stop
chkconfig pcscd off
 
service sbadm stop
chkconfig sbadm off

HTTP Server

I've gotten a renewed appreciation for Apache lately, but I'm not going to favor one web server over another here. With the exception of mod_security, everything below should be possible on pretty much all the popular web servers like Apache or Lighttpd.

Web Server

If you're going to stay with the default web server that's installed on the server you should, at the very least, rebuild it to make sure you're using the most up to date version.

Once you're dealing with a new(ish) installation of a web server the next thing you need to do is create the default site. This is the site people will see when they put either the IP address or the hostname of the server into a browser. I always set it up to use a blank page instead of the standard or default page. This way it doesn't look janky when users stumble upon the server.

Next, you want to disable Indexes. This setting is useful to prevent people from hitting a directory and seeing all the contents. If a user does try to read a directory, "images" for example, they will get a 403 (Forbidden) page.
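In Apache that's a single directive, either in the vhost or the global config; the path is just a stand-in for your document root:

<Directory /var/www/html>
    Options -Indexes
</Directory>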

Another thing I like to do is change the cPanel and http server skeleton files to blank pages. This is nice so when another site is created the site gets setup with blank pages instead of the "advert" pages for the system.

ModSecurity is an open source intrusion detection and prevention engine for web applications. It operates embedded into the web server, acting as a powerful umbrella - shielding applications from attacks. It's really cool.

ModSecurity works with Apache but there are always people out there experimenting so, hopefully, other http servers will get coverage some day. If you're using Apache you should definitely, 100%, no excuses, install ModSecurity.

PHP

For all the jokes about PHP being a sub-par programming language, according to the TIOBE Programming Community Index for May 2009 it's the most popular web development language available. So suck it.


It is true that php's the only language with a configuration file. I admit; that's just fucked up man...

Improve PHP

PHP is an interpreted language so right away your scripts are going to take a performance penalty (compared to a compiled language like C# for example). To help alleviate this you should always, always install some sort of opcode cache. My personal favorite is xCache but there's also eAccelerator, to name just one.

Installing xCache is pretty straightforward and setting it up is just as easy. Once it's done you should notice a considerable improvement in performance.
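For the record, XCache builds like most PHP extensions; the version number below is only an example, and the final php.ini wiring is spelled out in the INSTALL notes that ship with the source:

tar -zxvf xcache-1.2.2.tar.gz
cd xcache-1.2.2
phpize
./configure --enable-xcache
make
make install      # then add it to php.ini per the INSTALL notes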

You'll also need to upgrade PEAR to make sure you're using the latest versions of packages and such. It's pretty easy; from a command prompt:

pear upgrade pear

After that you'll want to make sure the below packages are installed and up to date. These are just what I personally use and what the majority of open source php projects I've seen use.

pear install HTML_QuickForm
pear install Table
pear install Cache
pear install Cache_Lite
pear install Mail
pear install Mail_Mime

Secure PHP

Pretty much all of the security stuff is done by configuring PHP in php.ini. If you don't know where it is you can either create a phpinfo() call and look for the path to php.ini or, better, just use the 'locate' command from the command line:

locate php.ini

Unless you're using an old version of php (and why the fuck would you?) it's going to come out of the box with the WORST setting php ever introduced: 'register_globals'. If you require this setting to be active for new projects, you're an idiot. I do, unfortunately, know about legacy apps requiring register_globals to be on, but it's a per-directory setting, so you can turn it back on for just that one project (in its vhost or .htaccess) and turn it off FOR EVERYTHING NEW YOU DO.
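For reference, and assuming you're running mod_php, the relevant lines look something like this; the global default lives in php.ini and the per-project override goes in that one project's .htaccess or vhost only:

; php.ini - the sane default
register_globals = Off

# .htaccess for the one crusty legacy app that needs it (requires AllowOverride Options)
php_flag register_globals on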

You're going to want to disable quite a few functions too, if possible. There's definite use for a lot of the below functions and sometimes the project you're hosting is going to require some of them. For example, on php 5.2.9 popen is required for some PEAR packages (and PEAR itself I think). This should be done on a case by case basis. But if you don't need to keep these open, DON'T.

Look for the string 'disable_functions' in your php.ini and add any of the below to that string. (One note: allow_url_fopen isn't a function, it's an ini setting of its own, so it goes on its own line rather than in the list.)

disable_functions = show_source, system, shell_exec, passthru, exec, phpinfo, popen, proc_open
allow_url_fopen = Off

Configure MySQL

One thing really; change the god damn root password!! Set it to something; anything is better than NOTHING.
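On a fresh install the quickest way is mysqladmin; obviously use a real password, not the placeholder:

mysqladmin -u root password 'use-something-long-and-random-here'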

I've had some people recommend disabling LOAD DATA LOCAL but while it sounds like a good idea it doesn't really gel with the way I work. I need that enabled to import databases on the server when the file is too big to upload into phpMyAdmin. I'm sure I could just enable it, do my thing, then disable it again but that sounds... troublesome.

Run Benchmarks

Running benchmarks is probably the most important part of setting up a server. It's important because the results of the benchmarks are going to tell you what you have to do next.


There are a few different tools for benchmarking a server but the most popular is ApacheBench. According to Wikipedia:

ApacheBench is a command line computer program for measuring the performance of HTTP web servers, in particular the Apache HTTP Server. It was designed to give an idea of the performance that a given Apache installation can provide. In particular, it shows how many requests per second the server is capable of serving.

ApacheBench comes standard with the Apache distribution so on most systems it's already going to be there. There's already a really good tutorial on NixCraft on how to use ApacheBench so I won't go into it. Just check out the tutorial above and you'll have a good understanding to start with.
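A typical run looks something like this; -n is the total number of requests, -c is how many to fire concurrently, and the URL is whatever page you're tuning:

ab -n 1000 -c 10 http://localhost/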

I will say that this portion should take a good while to do properly because you'll be doing it a lot. You're going to want to tweak your HTTP server configuration to get the optimal performance, and every time you make a change you're going to want to confirm the improvement.

It will get old.

Now, as I mentioned above this is by no means a definitive guide or anything. It's just the bare minimum that should be done when you're setting up a Linux web server.


Copyright © 2008 - 2018 Eric Lamb - All rights reserved