Category Archives: Technology

Three Required Programming Books

We were recently trying to hire some software engineers at work. Our usual approach with candidates involved a team interview session where the current developers all asked questions. One of the developers on my team always asked some variation of, “What are three books that you think are important for all developers?” That’s not exactly how he phrased it, but in my mind I translate it into, “What three books would you expect any professional developer to be familiar with?”

It’s an interesting question, and you can learn a few things about a candidate from the answer. I’ve thought about it, and I know what my answer is. I suspect it’s not the answer that would necessarily win me the most points if I were a candidate interviewing with that team. My answer certainly reveals two strong biases of mine. First, I believe all professional programmers should be near-expert-level C programmers (at least in terms of the language itself, not necessarily in the practical sense of being able to successfully develop or manage a huge C project). Second, I believe all professional developers should be familiar with The Unix Way, because it is mostly The Right Way. Whatever the market says (clearly I’m about to disagree with the market…), it’s hard for me to take Windows seriously as an enterprise application development and hosting platform; it deserves little more than to be considered a passing fad.

Right. Back to the three books:

The C Programming Language, by Brian W. Kernighan and Dennis M. Ritchie. Given my first bias, this is an obvious choice. There has never been, nor will there likely ever be, as definitive or widely recognized a volume on any language. Read it. Know it. Love it. Everything that programmers do has C underneath it. While hiring is far too complex, with too many other factors involved, to boil down to a simple litmus test, if life were that simple I’d go so far as to say I wouldn’t hire any developer who hasn’t read K&R cover-to-cover at least once.

Advanced Programming In The Unix Environment, by W. Richard Stevens. Here you see my other bias playing out. I don’t really care what platform your current job targets. If you’re not able to dabble in Unix system programming (in C, of course), I’m not convinced that you have the same fundamental developer chops as people who can. This isn’t necessarily a read-cover-to-cover book once you get far enough in to understand the general Unix Way, but if you haven’t actually implemented C code that does string manipulation, file I/O, network sockets, memory management, threads, safe concurrency with critical sections, and so on, then higher-level languages and frameworks become a crutch and you are more likely to make bad decisions. If you know how to do it in C, though, you can use those higher-level languages for practicality and productivity while knowing what their underlying implementations likely look like, and make correct decisions accordingly. If you’re a Windows Guy… sorry, Unix system programming is just so much more kick-ass than Windows system programming. (By the way, university CS programs that don’t make their Operating Systems students write a Unix shell in C with program execution, pipe support, stdin and stdout redirection, and a few other features are really doing their students a disservice.)

The Art of Computer Programming, by Donald Knuth. This is on my list because it simply has to be; it is the definitive monograph on the subject. I’m the first to admit that I haven’t actually read it all. The first few parts of Volume 1 deserve an honest read-through. The rest is great to skim, picking out topics you’re interested in to read in more depth…but really the overall exposure is what you’re after: getting used to thinking about algorithms the way Knuth talks about them. These volumes are perhaps the least practical books in my entire technical library, but if you put some effort into reading them–or parts of them–you will come away smarter than you were. The information density in these books is impressive. And it turns out they have a practical side, too… I have, on numerous occasions, wanted to do something and just picked an algorithm straight out of one of the books to implement.

It’s great if candidates–or any developer–have also read practical books related to the languages, frameworks, and trends that the company they work for uses. In fact, reading the three books I listed does very little to prepare a developer to work in a real-world software company. However, without understanding these three books (the level of understanding required follows the order in which I listed them… fully understand The C Programming Language, have a good handle on Advanced Programming In The Unix Environment, and get what you can via osmosis from The Art of Computer Programming), I honestly believe developers are at a disadvantage, and it will show in their software.

Automating Oracle Database Creation

Why?

I went through a stretch where, for one reason or another, I found myself creating lots of new Oracle databases on various systems. These databases were primarily on remote Solaris systems (because, as always, I don’t believe in running Oracle on Windows!).

The “obvious” way to create databases is with the Database Configuration Assistant (DBCA). However, I was unsatisfied with this approach for several reasons:

First, DBCA is a GUI tool and I only connect to the database server with SSH. To use DBCA, I ran a local X server and used X11 forwarding over SSH. Technically effective, but X over anything other than a fast local network is barely usable.

Second, I wanted to provision databases that were as “lean and mean” as possible. The databases were usually for development or quick testing of different applications, and most applications didn’t depend on much Oracle-specific functionality or many advanced Oracle features. The databases that come out of DBCA always seemed a bit bloated to me. Furthermore, for applications that do use specific Oracle features (such as the embedded Java runtime, Streams, CDC, etc.), I want to know specifically what needs to be added to the base database to enable the functionality rather than just relying on an install-everything approach.

Finally, I believe anything you need to do server-side to deploy applications should be automated (or at least support being automated). Creating databases with the same script across all of my environments is much lower risk than remembering to click all the same settings in a GUI tool as I move from environment to environment. I also found that databases I created with DBCA on different systems tended to vary in where various directories ended up, depending on how Oracle was installed. Over time I’ve come to like a particular scheme for organizing multiple databases on a single server, so by scripting the process I can go to any server I’ve created databases on and know exactly where to find everything.

With all of that in mind, I went in search of the deep dark secrets of creating Oracle databases with plain SQL scripts run through SQL*Plus instead of DBCA. This really boils down to three steps:

  1. Prepare to create the database
  2. Create the database
  3. Run post-creation scripts

Preparing to create the database really just involves making the directory structure you want and writing the Oracle parameter file (init.ora) for the database you are going to create.

Then, creating the database is the big CREATE DATABASE statement that actually (duh!) creates the database.

And finally, you need to run the SQL scripts to create the initial schema objects. This is also the first good opportunity to migrate the pfile to an spfile.

How?

The approach I took was to write a shell script that creates the directory structure and generates the SQL and shell scripts that create the individual database (they go in the database’s admin directory, so the creation scripts used for a particular database are tucked away in that database’s own directory structure for future reference).

The “creation script creator script” has some parameters you can change to indicate where Oracle is installed, and the rest of the script builds paths based on how I normally set things up and like to see them organized. Very briefly: the Oracle product is installed under /u01, all of my data files go under /u02/oradata/database, and recovery files go under /u02/orarecovery/database. I put two control files under /u02 and stash one under /u01, on the theory that /u01 and /u02 should be different LUNs. Any other administrative stuff goes under /u01/app/oracle/admin/database.
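
To make that concrete, here is roughly the layout the script produces for a hypothetical database named dev01 (only the SID changes from database to database):

/u01/app/oracle/                        Oracle base
/u01/app/oracle/product/10.2.0/db_1/    ORACLE_HOME
/u01/app/oracle/admin/dev01/            adump, bdump, cdump, udump, and scripts directories
/u01/app/oracle/oradata/dev01/          the third control file
/u02/oradata/dev01/                     data files, redo logs, and the first two control files
/u02/orarecovery/                       flash recovery area (db_recovery_file_dest)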

The SID of the database you want to create is the only command-line parameter to the script. If you want anything else to be different, you need to edit the script ahead of time. If you don’t change the template for database creation and parameter file creation in the script, you’ll end up with a character set of AL32UTF8 and the database configured to use about 512MB of RAM on the system.

So without further ado, here’s the script I use:

#!/bin/sh

DB_SID=$1
DB_DOMAIN=mattwilson.org

ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=${ORACLE_BASE}/product/10.2.0/db_1
ORACLE_ADMIN=${ORACLE_BASE}/admin/${DB_SID}

DATA_PRIMARY=/u02/oradata/${DB_SID}
DATA_SECONDARY=/u01/app/oracle/oradata/${DB_SID}
DATA_RECOVERY=/u02/orarecovery

# Create admin directories
mkdir -p ${ORACLE_ADMIN}
for x in adump bdump cdump udump scripts
do
        mkdir ${ORACLE_ADMIN}/${x}
done

# Create data directories
mkdir -p $DATA_PRIMARY
mkdir -p $DATA_SECONDARY
mkdir -p $DATA_RECOVERY

# Create init.ora file for instance
cat - > ${ORACLE_ADMIN}/scripts/init.ora << __EOF__
db_name = $DB_SID
db_domain = $DB_DOMAIN

db_block_size = 8192
undo_management = auto
undo_tablespace = undotbs1

control_files = (${DATA_PRIMARY}/${DB_SID}_ctrl_01.ctl,
                 ${DATA_PRIMARY}/${DB_SID}_ctrl_02.ctl,
                 ${DATA_SECONDARY}/${DB_SID}_ctrl_03.ctl)

background_dump_dest = ${ORACLE_ADMIN}/bdump
core_dump_dest = ${ORACLE_ADMIN}/cdump
user_dump_dest = ${ORACLE_ADMIN}/udump
audit_file_dest = ${ORACLE_ADMIN}/adump

db_recovery_file_dest = $DATA_RECOVERY
db_recovery_file_dest_size = 2147483648

sga_target = 402653184
__EOF__

# Create database creation script
cat - > ${ORACLE_ADMIN}/scripts/create.sql << __EOF__
connect / as sysdba
set echo on
spool ${ORACLE_ADMIN}/scripts/create.log

startup nomount pfile=${ORACLE_ADMIN}/scripts/init.ora;

CREATE DATABASE "${DB_SID}"
MAXINSTANCES 1
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
CHARACTER SET AL32UTF8
NATIONAL CHARACTER SET UTF8
DATAFILE '${DATA_PRIMARY}/system01.dbf'
        SIZE 128M
        AUTOEXTEND ON
        NEXT 128M MAXSIZE UNLIMITED
        EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE '${DATA_PRIMARY}/sysaux01.dbf'
        SIZE 128M
        AUTOEXTEND ON
        NEXT 128M MAXSIZE UNLIMITED
UNDO TABLESPACE "UNDOTBS1" DATAFILE '${DATA_PRIMARY}/undotbs01.dbf'
        SIZE 128M
        AUTOEXTEND ON
        NEXT 16M MAXSIZE UNLIMITED
DEFAULT TEMPORARY TABLESPACE TEMP
        TEMPFILE '${DATA_PRIMARY}/temp01.dbf'
        SIZE 32M
        AUTOEXTEND ON
        NEXT 8M MAXSIZE UNLIMITED
DEFAULT TABLESPACE USERS DATAFILE '${DATA_PRIMARY}/users01.dbf'
        SIZE 64M
        AUTOEXTEND ON
        NEXT 64M MAXSIZE UNLIMITED
LOGFILE GROUP 1 ('${DATA_PRIMARY}/redo01.log') SIZE 64M,
        GROUP 2 ('${DATA_PRIMARY}/redo02.log') SIZE 64M,
        GROUP 3 ('${DATA_PRIMARY}/redo03.log') SIZE 64M;

@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql

connect system/manager
@?/sqlplus/admin/pupbld

connect / as sysdba
shutdown immediate;
connect / as sysdba
startup mount pfile=${ORACLE_ADMIN}/scripts/init.ora;
alter database archivelog;
alter database open;
create spfile='${ORACLE_HOME}/dbs/spfile${DB_SID}.ora'
        from pfile='${ORACLE_ADMIN}/scripts/init.ora';
shutdown immediate;
startup;

execute utl_recomp.recomp_serial();

exit;
__EOF__

# Create run script
cat - > ${ORACLE_ADMIN}/scripts/create.sh << __EOF__
#!/bin/sh
ORACLE_HOME=$ORACLE_HOME
ORACLE_SID=$DB_SID
export ORACLE_HOME ORACLE_SID
\$ORACLE_HOME/bin/sqlplus /nolog @create
__EOF__

chmod +x ${ORACLE_ADMIN}/scripts/create.sh

# All done!
echo -------------------------------------------------------------
echo Ready to run create database script.
echo Go to ${ORACLE_ADMIN}/scripts
echo Then run create.sh in that directory.
echo -------------------------------------------------------------

Just save that as something like create-setup-script.sh, make it executable, and you’re all set.
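
For example, creating a database with a hypothetical SID of dev01 looks something like this (run as the Oracle software owner, adjusting paths if your installation differs from the assumptions baked into the script):

$ chmod +x create-setup-script.sh
$ ./create-setup-script.sh dev01
-------------------------------------------------------------
Ready to run create database script.
Go to /u01/app/oracle/admin/dev01/scripts
Then run create.sh in that directory.
-------------------------------------------------------------
$ cd /u01/app/oracle/admin/dev01/scripts
$ ./create.sh

The generated create.sh sets ORACLE_HOME and ORACLE_SID, then runs create.sql through SQL*Plus (spooling its output to create.log in the same directory), so when it finishes you should be left with a running archivelog-mode database that has switched over to the spfile it just created.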

IT Mergers and Acquisitions

My business unit was recently acquired by another company, along with all of us employees. So now I work for a new company that is going to keep us all in Portland to establish a west-coast office (they’re based back east).

The upshot of all of this is that the IT guys at my old company are all staying with the old company, so the west-coast office of my new company has a long list of IT infrastructure and support needs but no local IT staff. Thus, I now have two full-time jobs: keeping up with my normal customer consulting duties as well as making the IT transition happen.

Truth be told, I am having fun. It’s been several years since I last unboxed and configured new Cisco gear, set up new file, print, and directory servers, got into wiring closets to patch drops, etc. This particular case is interesting because it’s not just an outright merger of two companies–since it’s only the sale of a business unit, there are really three parties involved: Company A, the group of people and resources moving from Company A to Company B, and Company B. At the moment the group in the middle is very dependent on IT resources at Company A, some of which are clearly only useful to the business unit and can move as-is, and some of which are shared and so can’t just move over with the people. We’re also trying to involve people in Company B’s operations as soon as possible, so we’re all accessing resources on both networks while the disentangling and migration happens.

At heart I’m really a systems guy–server operating systems, networks, the hardware it all runs on, etc. (but please, leave the client desktop hardware and software to someone else!)–so it’s nice to get back into it for a little while with real production systems instead of my little Solaris test lab at home. I’m lucky that when I was an intern at Intel we had a lab with a few Cisco Catalyst 6500-series switches and Cisco 7000-series routers, and before they actually needed to be deployed, the network guy let me and some other interns loose to play with them and learn all about how Cisco gear works. If not for that, I probably would be totally lost getting our new layer-3 core switch up and running, but surprisingly all of the IOS knowledge tucked away in long-term memory has bubbled back to the top of the stack pretty quickly. Coupled with Eric’s excellent checklist for new Cisco switch and router setup, I have our new network ready so we can start migrating services and eventually cut off all connectivity to the old company.

Apple, why do you do everything you can to keep me from buying another Mac?

I’ve had a 20″ Core Duo iMac for a while now. It’s been a good machine. Mac OS X is decent to work with for what I do at home, the display looks nice… really no major complaints except one: it only supports 2GB of RAM.

Yeah. This is a machine purchased in June 2006, and it’s only expandable to 2GB. All of my PCs that are still around, which were purchased before the iMac, can hold at least 4GB of RAM. And they were all less expensive than the iMac.

Anyway, this iMac could easily serve me fine for another couple of years if only it could hold more RAM. 2GB just isn’t enough, and there’s something about the way Mac OS handles memory management that is horribly bad (something I noticed with both this iMac and my original Mac Mini before it). Both of my machines at work, one running Windows XP and one running Linux, also have only 2GB of RAM but seem to be able to handle much more of a workload before performance starts degrading.

In any case, 2GB of RAM just isn’t enough for a machine running Mac OS X. So I’m thinking, you know, I shouldn’t have to send loads of cash to Apple for a whole new machine just because they chose ridiculously memory-limited motherboards, but the performance of this machine is just killing me sometimes. My options with this machine are limited, so…

I head over to Apple’s site to look at the specs on the latest iMacs. And guess what: they’re already setting me up to have to buy another one in a couple more years. The whole iMac lineup is limited to 4GB of RAM! This from a company that so loudly boasts about the 64-bitness of its operating system. It’s like they don’t expect anyone to actually run apps on their computers. I don’t even have high demands: I just want to be able to keep my web browser, iTunes, NetBeans, and a VMware VM with only 384MB of RAM allocated to it running. If my 2GB machine can’t even handle that (it can’t), how long do I think a machine with only 4GB of RAM will last me? I hate that the only reason this computer won’t last me several more years is that Apple skimped on how much RAM it can hold.

I can’t bring myself to buy a computer in 2009 that can only hold 4GB of RAM. So, looking for other options, I think, “well, maybe I need to go to their ‘pro’ line of systems,” even though consumers deserve more than 4GB of RAM too, without having to buy a new computer again in a couple of years.

First up: MacBook Pro. Ignoring that it’s too expensive, I know a lot of people who use these as their main machines with a monitor and keyboard plugged in at the desk. Not my ideal scenario, but luckily I don’t need to worry about it: even on a supposedly “professional” machine, the 15″ MacBook Pro is limited to 4GB of RAM. So we can write that off as a very expensive short-term toy like the iMac.

That just leaves the Mac Pro. The starting price of this puppy is $2,800, and that doesn’t even get you a monitor like the iMacs or MacBooks. Sure, it holds as much RAM as you want to throw at it, but seriously, I’m not going to pay $2,800 for a computer.

For the price of an iMac (I’d probably go with the $1,800 one), you really should be able to expand to more than 4GB of RAM. To ask me to jump from $1,800 to $3,400 (remember, I need to buy a monitor with that Mac Pro) just to satisfy the requirement that I’m able to run more than a couple of applications at a time is ludicrous.

Just for comparison, I priced out a system that is much faster, can hold much more RAM than the iMac, comes with a 24″ display, etc. at Dell and the grand total is… $1,200.

So what do I do? I won’t run Windows at home, of course, but I have no objections to using Linux for my main home system, which I was doing before I got back into Macs. I could build a lightning fast box for much less than even the iMac. But I’d rather just stick with a Mac if only it could hold more RAM.

Listen up, Apple: I want to give you my money. Just not $3,400 of it! You’re making it so hard for me to be your customer. Can’t you at least try to keep up technical parity in your consumer line with the competition?

Dear Google: can you please add two features to GMail for me?

For several years, I ran my own server to handle my email. At first it was a fun project, gave me good real-world experience, and provided flexibility that I wouldn’t have had with most hosted options. Procmail and mutt were my friends. Over time, though, keeping up with anti-spam measures became more of a burden than it was fun, and in the grand scheme of things I just didn’t feel like spending my free time on the care and feeding of a production mail server.

The death knell for my own server was the introduction of Google Apps For Your Domain. Having played with regular GMail in the past, I liked the interface and its threading model, and I buy into the philosophy of searching email archives instead of trying to organize them. For those and other reasons, moving email to Google Apps sounded like a good option, so I set up a test domain and eventually moved mattwilson.org to Google Apps.

In short, I’ve been happy with the service and their spam filter is amazingly accurate. So I’m a happy camper, but there is one area where I’d like to see a couple of improvements: handling email list subscriptions.

I subscribe to several mail lists, and GMail’s searching and conversation threading features particularly shine when reading list traffic. Each list gets its own label and messages “skip the inbox” so I can just go through and read the lists I’m interested in as I have time. But here’s where the problems arise:

First, GMail’s filters don’t allow me to reliably drop messages from particular lists under a particular label (for GMail neophytes, think of labels as folders). For some lists I’ve subscribed to, the only way to identify that a message came from that list is by looking for a specific header (a List-Id header, for example). Unfortunately, I can’t filter based on headers with GMail, so messages from those lists can’t be filed correctly. Even for the majority of my lists, which I filter based on the list address in the “To” field, I occasionally get messages in the inbox because the list was bcc’ed for that particular message. There’s another header that still identifies the list, but I can’t act on it. So feature request one: I’d like to filter based on headers.

Second, I don’t read every message on every list. My workflow is to click on a label, scan the subject lines, and read the messages that look interesting. This leaves several unread conversations, and in the best case it takes three clicks to mark the remaining conversations as read. If I’ve been on vacation or haven’t read list traffic for a couple of days and the messages extend past the first screen of the list, it takes even more work to mark them as read. So feature request two: while browsing a label, I’d like a “Catch Up” or “Mark All As Read” button right up there next to the Delete button.

GMail is a natural fit for managing an email account that is subscribed to mail lists. The search is great, and the conversation interface is wonderful for following threads. With the addition of header-based filtering and a quick way to mark everything from a list as read, it would be truly fantastic.

Cell Phones (and yes, the iPhone)

By now, of course, you know it happened: Apple, Inc. announced the iPhone.

As cool as the phone looked throughout the entire demo, I was upset the whole time (and continued to rant all day…) that it’s a GSM/EDGE device. I am in no way a fan of Verizon Wireless as a company, but the bottom line is that they have the best network (in all measurable areas: coverage area, call quality, call setup time, etc.) in the area where I live. EvDO is also significantly faster than EDGE, which is going to matter for a mobile device like the iPhone. But more on the cell carriers later.

First, the iPhone itself. There’s not much to say other than “drool.” How can you not want one?

Warning: boring digression!

Perhaps a digression is in order: the iPhone announcement comes at an interesting time for me because I recently evaluated—and briefly tried—the switch to smartphone-land. My first attempt was a BlackBerry Pearl with T-Mobile, which had a fabulous web browser, but otherwise I wasn’t a fan of its interface and capabilities. RIM has its (admittedly large) niche, but I wasn’t necessarily looking for real-time Exchange integration as my killer feature. I was coming from fantastic call quality and coverage with Verizon Wireless, so the BlackBerry experience wasn’t quite doing it for me and I switched back to my old phone and old plan.

My other option, then, was the Treo 700p. I’d have to pay an arm and a leg for the device, but I used Palms long ago and knew I’d like the PDA functionality, so it was really just a question of online data access. Sadly, that was a joke. The web browser (when it could render a page at all) was horrendous compared to the BlackBerry’s, and the most important feature for me in a smartphone is web browsing. Also, despite being on a data network an order of magnitude faster (EvDO) than what the BlackBerry had access to, browsing the web on the Treo was painfully slow. It was clear the whole device was single-threaded at the operating system level, and it was just an awful experience. I’m sure the Treo is fantastic in every other way, but if it can’t browse the web decently, why even sell it?

I’m not interested in Windows Mobile-based devices, and I have a huge financial incentive to stick with my current (voice-only) cell phone and plan, so I shelved my smartphone plans around the end of November and decided to give the market some time to get better.

End digression! 

Which brings us back to the topic at hand: the iPhone just came along. It looks to be exactly what I want: non-Windows-based smartphone with a fantastic web browser and nice interface. And a mail client that can do direct IMAP or POP3 on top of that (this was a problem with the BlackBerry and, as far as I could tell, the Treo—they each had to proxy IMAP or POP3 stuff through the wireless provider, I think. This was an extra charge with T-Mobile and I don’t know how Verizon handled it. I want the phone to make a direct TCP connection to my mail server to check mail!).

I want one.

But… there are snags:

  1. Cingular??? Puh-leeze. They are the worst carrier (call quality/coverage/dropped calls) in this area from everything I’ve seen. At least Apple could have gone with T-Mobile to throw in the “hip and cool” angle.
  2. GSM/EDGE? This one is understandable (sadly), but still not what I want. CDMA/EvDO is just plain better, if for no other reason than that EvDO delivers truly broadband-like speeds and EDGE doesn’t. The international market is almost exclusively GSM, though, which is why this decision is understandable. I don’t know much about higher-speed GSM data technologies, but we’ll have to see how quickly Cingular builds out its network with better tech and whether Apple follows with a matching phone.
  3. This is a very expensive setup. The phone is very pricey and really isn’t a suitable iPod replacement (8GB in the most expensive model, which is a bit of a joke for their first “widescreen video iPod”), so you can’t use the “well, you’re getting a phone and an iPod for the price of one” argument. You will still want to buy the real widescreen video iPod when it comes out, so budget another few hundred bucks for that. Also, I don’t think most people realize how much an unlimited data plan costs: expect your cell phone bill to double if you have a regular 450-900-minute-per-month plan. In the Cingular case, unlimited data looks to be $45/month on top of your voice plan.

Points 1 and 2 aren’t likely to affect the mass market; I just don’t like them. Point 3, though, is interesting to me. What market is Apple going for with this phone? I don’t have any data on this, but I would guess that the majority of cell phone accounts with the extra $45/month data plan are corporate lines of service. There’s nothing out there yet indicating Apple has any kind of over-the-air Exchange integration story for the iPhone, which will prevent its adoption as a replacement for most of those corporate devices currently tied to data plans. That still leaves lots of people who are interested in doing this sort of setup on their own (like me), but this isn’t exactly something like an iPod, where Mom and Dad can sink a one-time cost into the device and the kid is happy. Will this be a compelling device without a data plan? Perhaps. Is part of the Cingular/Apple deal a special service plan to get people on board? Perhaps. There are different data plan options (most BlackBerrys are on special BlackBerry data plans, with additional services like IMAP/POP3 mail checking requiring an additional charge), and Cingular looks to have a web-browsing-only plan for certain smartphones, but in the case of the iPhone that gets back to the question of direct mail client connections versus proxying through some webmail service.

But forget all that. My big question about the iPhone: what does “runs Mac OS X” mean? It sure doesn’t mean that it’s literally the same operating system distribution that runs on my desktop machine. I suspect it does mean there are parts of Darwin underneath with some key APIs to make it look like Mac OS X for development purposes. (Which segues into the next question: what does developing for the iPhone look like? A new Xcode module? Will there be a simulator? etc.)

On that note, here’s some potentially show-stopping news (for me, at least, not most people) that I ran across during my iPhone news roundup at the end of the day: is it true that there will be no third-party development for the iPhone? This seems to be confirmed by another source on the show floor.

Anyway, at the moment, I want one. We’ll see what’s happening in the second half of this year.

Oh, and as promised, my quick thoughts on cell providers in the Portland, Oregon area:

  • T-Mobile. I really like T-Mobile because they have great customer service and the best plans/prices. The downside is limited coverage area and GSM/EDGE.
  • Cingular. Good luck actually getting through a complete call with someone and having both parties able to understand each other the whole time! Even if you could make good calls reliably, it would still be unfortunate that it’s GSM/EDGE.
  • Verizon Wireless. Pure evil. They cripple their phones so that even if the phone is capable of (for example) sending pictures you took to your computer via Bluetooth, that feature is disabled so you have to use Verizon’s $0.25-per-picture over-the-air picture delivery service. There was a class-action suit against them because of this and folks got new phones, but unfortunately it didn’t result in Verizon changing the practice of crippling phones going forward; they just added more fine print to cover themselves against future lawsuits. The most mega of the mega corps when it comes to cell phones. BUT (and this is important) they have the best network in terms of coverage, reliability, etc. They are also CDMA and have great EvDO service around the country. At the end of the day I’m not paying my cell phone company to let me take pictures with my phone, I’m paying them to move my voice and data. It’s incredible how much better Verizon does this than the other carriers I’ve dealt with, so…sadly…Verizon gets my business.
  • Sprint/Nextel. Irrelevant. (yeah, I know, harsh! But at the moment, they are. Come on, you go to Radio Shack to buy them. That can’t be a good sign!)

That’s it for now. As I said, I’ll be curious to revisit the iPhone after the first round of people gets them and takes them for a spin.