Saturday, April 30, 2005

Java vs .NET

* Companies focused on a more robust and secure solution that will manage more data tend to choose J2EE.
* Companies that need a solution faster, and with less business complexity, tend to choose .NET.
* .NET has an edge in client-side development because more tools are available.
* Sometimes the challenge can be finding developers with skills in both platforms or retraining Java developers to use .NET or vice versa.

Endless debates swirl around the technical merits of Microsoft’s .NET platform vs. Sun Microsystems’ J2EE. Both companies tout their products’ scalability, security, interoperability, speed, support for other products and more. But what about the business case for either platform? What sorts of companies are using which platform for what, and what reasons are behind the platform choices that firms make? Is the balance of power between these two powerful platforms shifting?

Short answers are dangerous when it comes to discussing J2EE and .NET, but in terms of which platform lends itself better to which sort of project, research tends to indicate this: Companies that are focused on a more robust and secure solution that will manage more data tend to choose J2EE. Companies that need a solution faster, and with less business complexity, tend to choose .NET.

A Forrester Research study from September 2004 in which 322 software decision-makers were questioned reports that J2EE is stronger than .NET in industries such as utilities, telecom, finance and insurance. .NET, on the other hand, is stronger in manufacturing, retail and wholesale, media, business services and the public sector. The report’s author, Randy Heffner, writes that “Firms that spend a higher percentage of revenue on IT are more likely to do the majority of development on J2EE. And .NET is more often the focus at companies facing weak business climates.” The report also noted that more than half of the surveyed firms were using both J2EE and .NET.

According to Gartner analyst Mark Driver, that’s the reality—most large organizations are using both platforms. In fact, he rarely encounters a company running only one. And although he sees plenty of momentum right now around .NET, Driver says that’s simply because it’s a newer platform. “There’s a lot of hype around it,” he says, while Java isn’t as exciting because it’s no longer new. The Java programming language has been around for about 10 years; Sun first introduced J2EE in mid-1999 as a platform-independent, Java-centric environment. Microsoft, in turn, announced .NET in 2000 as a major new platform and infrastructure strategy for the company.

For the most part, Driver says, Gartner tends to see J2EE and Java used for larger projects, with companies who have “a heavy need for multiple tools from multiple vendors, and the ability to deploy on non-Windows platforms.”

Also, vendor commitments heavily influence the choice of platforms, Driver points out. Large companies tend to run mainframes somewhere in the mix, which means they work with companies such as IBM, BEA, Sybase, SAP and Oracle. “Companies are going to use what those vendors are using, which is Java,” Driver says.

Eric Newcomer, CIO of IONA Technologies, agrees. Particularly in large companies, platform decisions in the past tended to be made on a departmental level. In fact, IT environments have grown up that way, Newcomer says, leading to today’s often-fragmented enterprises. IONA approaches that fragmentation by making products that allow business applications and middleware from various vendors work together. Customers tend to be fairly large, including financial, telecommunications and government systems.

As company size increases, so does the likelihood of seeing multiple platforms in use, agrees Joe Fernandez, director of product marketing for Web test products at Empirix. Fernandez’ company sells quality assurance and testing software for both J2EE and .NET applications—many customers are Fortune-100-sized companies. “As you get into larger businesses,” says Fernandez, “there’s an increasing likelihood [of seeing] both Microsoft and Java technologies within one organization.”

Along with fragmented departmental decisions, a large number of applications at large companies have come through acquisition, Fernandez points out: “The IT infrastructure comes together in lots of different ways.” For the most part, though, he believes large companies are still running Java, although he sees Microsoft moving its way up the food chain.

Selecting the right platform

Regardless of what others are running, when it’s time to make a platform decision for a new project, you’re still left with a tough decision. Simple categorizations like Java-big and .NET-small might apply, but only to a company starting from scratch without any existing software or history, points out Brian Lyons, CIO of Number Six Software.

Number Six specializes in mature software development techniques and large projects and companies, including various levels of government. (Lyons has put together a presentation brimming with resources on both sides of the debate, which are available for download at www.numbersix.com/csdi/documents/ThePlatformWarsx.pdf.)

Starting with a clean slate, a company really could choose a platform based simply on “the business people knowing what their significant business drivers are, and [the technical people] knowing what the tech drivers are,” Lyons says. More realistically, of course, companies must weigh a number of factors in choosing a new environment, including current platforms and tools, overall cost, skill sets of current staff, future estimates of growth, technical staff choices and management preferences. Since his company typically enters the picture when there’s already some infrastructure in place, Lyons says, “We look at business and technical drivers, as-is components, and help make a decision [on a platform].”

In general, there’s some agreement on the broad project categories that each platform is best for. IONA looks at it as an ease-of-use vs. complexity tradeoff, Newcomer says. “I think that the .NET framework and Microsoft, in general, have grown their business around the ease-of-use idea... They make their tools easy to work with. For simple, GUI-intensive [projects], the tools from Microsoft really have the edge.” On the other hand, he suggests, “for more complex coding, where it’s not as easy to access the features of the language, you may prefer to use Java... Perhaps you want to tune it for performance and for high scalability.”

Many factors to consider

Gartner’s Driver points out that many considerations should play into the platform decision. His advice includes carefully considering the product development lifecycle. If you’re looking at a three to five-year lifecycle, he says, where time to market is dominant, .NET might be a better choice, other things being equal. That’s because in shorter development cycles in which upfront costs are dominant, Microsoft can be a plus. “For ease of use and produce-ability, Visual Studio is a very, very nice development environment.” Although there are no absolutes, he emphasizes, Visual Studio can mean that .NET is a faster and cheaper choice for shorter development cycles.

On the other side are the larger costs and longer development time of a five- to 10-year lifecycle, or projects with more than 500 concurrent users and heavier legacy integration features. In those cases, Driver says, consider the additional flexibility of being able to change course over time with less impact. That points to a Java platform, because “I can switch middleware and I can switch tools, not at zero cost, but at less cost.” All of these options, he points out, presume a heavy consideration of return on investment and total cost of ownership.

On the other hand, if you’re already mostly a Microsoft shop, much of this simply may not matter. “If you’re Microsoft-centric,” Driver says, “there’s not a whole lot of reasons to look beyond .NET.”

Gauging developer availability

Along with many other considerations, one element to weigh in choosing a project platform is what developer resources are available now, at what cost, and how that might change in the future.

In an interesting geographical split, Number Six Software's Lyons sees more J2EE and Java in use on the East Coast and more .NET in the Midwest. That's partly because the federal government is still heavily invested in Java, which sways the results. It also tells you that if your company is in the Midwest, .NET developers may be slightly harder to find, since there are more work opportunities there for them.

“There’s more Java developers out there [right now] than .NET developers,” Gartner’s Driver confirms, but “in the end, there will be more .NET developers, just because of the sheer number of projects.”

Because Microsoft continues to be the dominant vendor at the lower end, Driver says, that favors a larger number of .NET developers eventually, since there will always be many more small projects than large ones. The low end “is still Microsoft’s business to lose,” Driver says. “It’s like a pyramid—non-mission critical stuff underneath and enterprise on top… After all, how many HR systems does a company have or need?”

“From what I’ve seen,” Empirix’ Fernandez says, “it’s still easier to find Microsoft developers. Microsoft has a large developer community and is doing a good job of training people on the latest [technologies].” Another observation, Fernandez says, is that when J2EE first came on the scene several years ago, “we had a harder time finding Java developers… Java has now been around for [a while]. So that developer community has grown and is a little more accessible.”

Who you can find also depends, of course, on what skill level you’re looking for. At Number Six, Lyons says, the company needs high-end, sophisticated developers for both platforms. “At that level, I can’t say that there’s any difference,” Lyons says, “either in salary or in ease of finding [candidates].” If you’re just beginning a .NET project, Lyons says, “you might find people on the Visual Basic side, but [they’re] not really enterprise designers.”

At IONA, Newcomer echoes that sentiment. “We need high-level developers who are multi-lingual—C++ and Java, for example.” In general, he says he’s found that Java developers tend to pick up C# programming, which is specific to the .NET environment, very easily. In general, Newcomer says, “Microsoft does a tremendous job of training people,” ensuring that there’s a steady pool of talent available.

Sometimes, the challenge can be finding developers with skills in both platforms, or retraining Java developers to use .NET or vice versa. In what he says is a fairly common type of IONA customer, Newcomer describes a recent scenario at a large bank. “They had all this .NET, lots of [.NET] developers, lots of Java app servers, and now they had to put them all together. But they had few people with skills across both platforms.”

The world is equally divided

Is either platform growing more quickly than the other? Although both Java boosters and Microsoft fans claim growth and dominance, most observers don’t see it. “At IONA, we see the world still roughly divided between Java and .NET developers,” according to Newcomer. “It could be tipped slightly toward one or the other, but our role is to try to integrate these disparate systems.”

One factor to consider on the Java and J2EE side is Linux. “We’re certainly seeing more Linux in the enterprise as a low-cost alternative,” Empirix’ Fernandez says. “It can offer better cost and scalability advantages.” Linux may indeed have an effect on J2EE, Gartner’s Driver says. “As [Microsoft] moves the battlefield higher and higher into the enterprise, they have a guerilla war flanking them—Linux. [And] Linux and Java have a very tight affinity.”

The growth and popularity of Eclipse, the open-source tools framework initiative, might also help by making Java more accessible, Fernandez says. “That’s always been the challenge for Java—the complexity. Eclipse is a platform the Java community rallies around.”

That you can run Java on many platforms is still a big differentiator, Fernandez says. “Microsoft has talked about that, but it’s not a reality yet in any meaningful way.” He mentioned the Mono project as an example, but says that “in terms of released software, we haven’t really seen anything yet.” The Mono project, begun in 2001, is an open-source development platform based on the .NET framework that can run existing programs targeting either the .NET or Java frameworks.

Although it’s technically true that J2EE supports multiple operating systems and .NET supports just one, Number Six’s Lyons finds that an oversimplification with little grounding in fact. “I think it’s an emotional issue,” he says, because “people like to feel [that they’re] not locked in.” But the flexibility espoused by Java proponents is largely an illusion, he says, since changing operating systems isn’t trivial on any platform.

“If you’re running [IBM] WebSphere or WebLogic,” Lyons says, “it’s not a no-brainer to change. People feel they’re not locked in, but really they are. You’re kidding yourself if you think you can just flip a switch and change.” Available development tools can also be a deciding factor, Driver says. The development tools available for Java “do a good job at the more component-based, service-oriented projects, [but] don’t compare to Visual Studio.”

The widest gap with tools, he says, is in client-side development. “That’s much easier in .NET than Java.” On the other hand, “Microsoft tools aren’t so good at formal engineering practices.”

As Microsoft continues to get more serious about enterprise frameworks, larger teams and tools for distributed development, it will start to release better products for those purposes, Driver says, beginning with Visual Studio 2005. Microsoft’s next-generation software development platform has been delayed repeatedly but is expected later this year.

Competition is good

With the market fairly evenly split between the two platforms, and with many companies straddling the divide by running both, a business case can clearly be made for either product in almost any situation.

There’s one fact that nearly everyone agrees on: both .NET and J2EE will continue to thrive in the market, since each has individual technical strengths, the backing of major software companies and huge followings.

The heated competition will continue to push both Sun and Microsoft to augment, support and bolster their competing products. Whichever platform you’re running or contemplating, that kind of competition is good news.




Friday, April 29, 2005

Estimating a project

When a project or collection of projects is in the idea or concept stage, you want to put together a high-level estimate to see whether the project is worth pursuing. You typically do not want to spend too much time working on a detailed estimate at this point, since you do not know if the idea is worthwhile. Basically, you just want to know the relative magnitude of the effort. While you may be asked to provide a high-level estimate of the cost, the business people are also struggling to understand and quantify what the benefits of the project will be.

The most accurate way to estimate a project is usually to build a work breakdown structure and to estimate all of the lowest level individual work components. This is a bottom-up approach. It is also the most time consuming and is not appropriate for the initial estimating that you do early on in the funding and prioritization process.

Instead, you will want to utilize a top-down approach, trying to gain as much estimating confidence as possible while also taking as short a timeframe as is practical. To give you a few examples, the following are all top-down techniques that should be considered. Depending on the project, you may find that one or more techniques will work especially well. If you think the effort is large enough to be considered a program (collection of projects), then you need to take your best guess at breaking it up into a corresponding set of projects and then estimate the projects at a high level.

Partial Work Breakdown Structure (WBS)

In this approach, you would start building a traditional WBS, but you would only take it down one or two levels. At that point you would estimate the different work components, using your best guess, or one of the other estimating techniques listed here.

Previous History

This is by far the best way to estimate work. If your organization keeps track of actual effort hours from previous projects, you may have information that will help you estimate new work. The characteristics of the prior work, along with the actual effort hours, should be stored in a file/database. You then describe your project in the same terms to see if similar work has been done in the past. If so, then you have a good idea of the effort required to do your work.
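
As a very rough sketch of what that lookup could look like in code (the record fields, the class names, and the similarity rule below are all invented for illustration, not taken from any particular tool), you might keep something like this:

    // Hypothetical sketch of a project-history lookup: each finished project is stored
    // with a few describing characteristics plus the actual effort hours it took.
    import java.util.ArrayList;
    import java.util.List;

    class PastProject {
        String type;      // e.g. "package rollout", "web application"
        int screens;      // one simple size characteristic
        int actualHours;  // what the project really took

        PastProject(String type, int screens, int actualHours) {
            this.type = type;
            this.screens = screens;
            this.actualHours = actualHours;
        }
    }

    class HistoryEstimator {
        // Return the actual hours of past projects that resemble the new work.
        static List<Integer> similarEfforts(List<PastProject> history, String type, int screens) {
            List<Integer> matches = new ArrayList<Integer>();
            for (PastProject p : history) {
                if (p.type.equals(type) && Math.abs(p.screens - screens) <= 5) {
                    matches.add(p.actualHours);
                }
            }
            return matches;
        }
    }

If that list comes back non-empty, averaging the hours it contains gives you a defensible starting point for the new estimate.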

Analogy

Even if you do not keep actual effort hours from previous projects, you may still be able to leverage previous work. Analogy means that you describe your work and ask your organization whether a similar project has been done in the past. If you find a match, see how many effort hours their project took and use this information for your estimate. (If the organization does not track actual effort hours, find out how many people worked on the project, and for how long, and then adjust the hours as needed.) Analogy is similar to the Previous History except that in the Previous History technique you have some structured method to compare historical projects to the one you are estimating. In the Analogy technique you do not have all the facts, so you are relying instead on comparisons with prior projects that seemed "similar".

Expert Opinion

In many cases you may need to go to an internal or external expert to get help estimating the work. For instance, if this is the first time you have used a new technology, you may need the help of an outside research firm to provide information. Many times these estimates are based on what other companies in the industry are experiencing. You may also have an internal expert who can help. Although this may be the first time you have had to estimate a certain type of project, someone else in your organization may have done it many times.

Parametric Modeling

To use this technique, a pattern must exist in the work so that an estimate of one or more basic components can be used to drive the overall estimate. For instance, if you have to implement a package in 40 branch offices, you could estimate the time and effort required for a typical large, medium, and small office. Then, group your 40 offices into buckets of large, medium, and small. Finally, do the math to estimate the entire project.
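
As a concrete illustration, the bucket math for the 40-office example might run like this (the office counts and per-size hours are made up for the sketch; plug in your own figures):

    // Hypothetical parametric estimate for rolling out a package to 40 branch offices.
    // All counts and per-office hours below are invented for illustration.
    public class ParametricEstimate {
        public static void main(String[] args) {
            int largeOffices = 5,  mediumOffices = 15, smallOffices = 20;  // 5 + 15 + 20 = 40
            int hoursLarge = 400,  hoursMedium = 250,  hoursSmall = 120;   // effort per typical office

            int total = largeOffices * hoursLarge
                      + mediumOffices * hoursMedium
                      + smallOffices * hoursSmall;

            System.out.println("Estimated effort: " + total + " hours");  // 2000 + 3750 + 2400 = 8150
        }
    }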

Ratio

Ratio is similar to analogy except that you have some basis for comparing work that has similar characteristics, but a larger or smaller scale. For instance, you may find that the effort required to complete a software installation for office A was 500 hours. There are twice as many people in the B office, which leads you to believe it may take 1000 hours there.

Estimate in Phases

One of the most difficult aspects of estimating projects is that you do not know exactly what work will be needed in the distant future. To reduce the level of uncertainty, you can break the work into a series of smaller projects and only give an estimate of the most current project, with a more vague estimate for the remaining work. For instance, many times you can provide a high-level estimate for an analysis phase, during which you will gather business requirements. After you have the requirements, then you will be in a position to estimate the rest of the project (or at least the next major phase). At that point, management can again do a cost-benefit calculation to determine if it makes sense to proceed with the rest of the project.

Summary

When you want to do a quick estimate of project cost, you want to use some type of high-level, top-down approach. Depending on the characteristics of the project and the type of information you have available, these approaches can actually be very accurate. Worst case, they should at least give you a decent ballpark estimate. From an expectations standpoint, this type of high-level estimate should be -25% to +75% accurate. That is, if you estimate the cost of the project to be $100,000, you would expect the actual cost to be in the range of $75,000 to $175,000. If your management or customer would like more accuracy than that, they need to give you more time to uncover more details, or to lay the work out at a lower level.




Wednesday, April 27, 2005

DB Atrophy

Ok, looks like I'm diverting too much from my daily job activities. So, here is something most of my readers will enjoy (yes, I'm talking to you programming fans...)

I feel we, the software development community in general, are suffering from a disease called "database atrophy" and most of us don't know it.

We are so dependent on the services our relational database management systems (RDBMSs) provide that many of us could not live without them.

I believe this RDBMS dominance is causing us to yield but a fraction of the power in our computers.

Object-oriented languages have been around for decades and the next few years, at least, will clearly be OO language dominated. There is no serious contending platform besides Java and .NET today.

Still, the promises of OO, such as high quality and reuse, remain unfulfilled.

With rare exceptions, the OO culture simply does not exist. People are still incapable of using OO to drive complexity out of their business logic.
And the reason for that is RDBMS dominance. There is absolutely no way of doing OO with your live data in an RDBMS.

Ten years ago, people thought that using "data-aware" GUI "objects" coupled to their database was using OO. Today, many people think that using an Object-Relational Mapping (ORM) tool to map dumb data-objects to database records is using OO. :(

ORMs such as Hibernate, Castor and Toplink are central players in this depressing context of ours. On the one hand, they provide a palliative in the absence of true OO but, on the other, they actually hamper the community by deluding it and giving OO a bad name.

But I believe a brighter future is inevitable:

  • OO design patterns are starting to kick in.
  • IDEs are starting to invest in decent OO design support rather than database-coupled "RAD" development.
  • Open source projects are cross-pollinating best OO techniques such as refactoring, test-driven development, implementation independence, etc.
Feel free to participate.

This post is dedicated to denouncing the symptoms and causes of database atrophy and providing a vision of what we can do in full health. :)

And since experience has taught me that I should not only pinpoint the problem but also offer a solution, here are two projects that can shake your brain out of the atrophy you have been living with (a quick sketch of the style they encourage follows the links):

db4o - "Objects are here to stay"
Prevayler - "Persistence is Futile"
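
To make that concrete, here is a minimal sketch of what working with live objects (no tables, no mapping layer) looks like in the db4o style. It assumes db4o's classic openFile/set/get API and an invented Customer class, so treat it as an illustration rather than a recipe:

    // Sketch only: persisting and querying live objects with db4o's classic API.
    import com.db4o.Db4o;
    import com.db4o.ObjectContainer;
    import com.db4o.ObjectSet;

    class Customer {
        String name;
        int visits;
        Customer(String name, int visits) { this.name = name; this.visits = visits; }
    }

    public class NoMappingDemo {
        public static void main(String[] args) {
            ObjectContainer db = Db4o.openFile("customers.db4o");
            try {
                db.set(new Customer("Ana", 3));                    // store the object itself
                ObjectSet found = db.get(new Customer("Ana", 0));  // query by example (0 acts as a wildcard)
                while (found.hasNext()) {
                    Customer c = (Customer) found.next();
                    c.visits++;                                    // work with the live object...
                    db.set(c);                                     // ...and store it back
                }
            } finally {
                db.close();
            }
        }
    }

Prevayler takes a different route (keeping all objects in memory and journaling the commands that change them), but the point is the same: the objects and their behavior stay at the center, not the rows.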




Tuesday, April 26, 2005

My Own WiMax Hot Spot

I know, I know. Am I not supposed to be the geekiest geek in my circle of friends? Am I not supposed to have all the stuff I usually talk about? Well, I don't. A couple of weeks ago, I finally got broadband in my home and finally set up a wireless network. Finally, I have access to the 'net from my home's thinking chair!


WiFi is a big leap forward for everyone, and while it is not yet broadly installed in Mexico's homes, it is starting to carve its path. Along these lines, Intel announced this week that it has begun shipping its first WiMax wireless networking chips to OEMs. Other manufacturers will soon follow, and the hype level will increase accordingly. But there is also plenty of noise coming from pundits saying that WiMax is a long time from being a major factor. I'm in the middle on this. The impact of WiMax devices is probably a couple of years away, but the impact of WiMax on the market can't be overestimated, and that's already begun. The big winners in this, oddly, are probably BitTorrent users. If only in this way, WiMax will shortly change our world.

WiMax, if you don't already know, is the IEEE 802.16 wireless networking standard that has people excited because it will support high data rates over long distances, sometimes up to 30 miles. Think of WiMax as long-range WiFi. From a logistical standpoint, WiMax beats the heck out of WiFi because you can plop an access point into the middle of town, feed it with a DS3, and have the whole town broadband-ready in a few days. That's the dream, and I am sure it will be eventually realized.

The reality behind the dream is that WiMax operates in several frequency bands, some unlicensed and some licensed. Next year, for a few hundred dollars, you'll be able to buy a WiMax access point operating in the 5.8 GHz band and offer service to your neighbors -- and ONLY your neighbors. At the power levels authorized for unlicensed use, 5.8 GHz WiMax is not going to offer significantly higher performance than does 802.11a today. Unlicensed WiMax will be a short-range service. In order to go those 10 to 30 mile distances, you'll need to operate at a lower frequency with more power, which means using licensed spectrum, which means paying real money.

So a WiMax metropolitan area network is likely to be owned by some concern with deep pockets, not by you or me. In that way, WiMax is not at all like WiFi. The big wallets are already coming into play as telcos, mobile phone companies, long-distance phone companies, and others start grabbing for those local frequencies. What we'll eventually see are two to three big players in most markets, and we'll still be sending someone a check every month.

But this is NOT a bad thing. WiMax will provide broadband competition in a way that WiFi never could. While WiFi was always at best a broadband extension, WiMax can be a broadband alternative to DSL and cable modems. This third player will lead to more competition and lower prices. That's why it is good.

Competition has an impact on more than just prices, though. Service providers can also compete on, well, service. We're seeing that right now in the U.S., where cable companies are jacking up their Internet speeds in an effort to keep customers from going to DSL, just as telcos are installing fiber-to-the-home to steal video customers. Adding a third major competitor in the mix will only accelerate this trend. And if Power Line Internet becomes a reality, it, too, will push service levels.

We've seen this before in mobile telephones where competition has driven down prices and made most services part of the package. We'll see more of that, too, with Internet service.

Which brings me to BitTorrent, which apparently is sucking up 30 to 40 percent of all Internet bandwidth though most Internet users (not you -- those other people) have never heard of it. BitTorrent is an Open Source peer-to-peer file-sharing application that is popular for distributing huge video files because it cleverly uses the assistance of your client computer to help redistribute to other downloaders those parts of the file that you have already received.

The powers that be -- ISPs, movie studios, etc. -- hate BitTorrent. The ISPs hate it because of all that bandwidth sucking and the movie studios hate it because they think BitTorrent is being used to steal their property.

Now let's look forward two to three years. Broadband will be pervasive by then and in nearly every city, users will have the choice of DSL, cable, WiMax, and possibly Power Line Internet service. Average speeds may be slightly higher, average bills will be slightly lower, and the market will be perfectly poised for video-on-demand (more properly download-on-demand) to replace much of broadcast and cable television as we presently know it. And when that happens, when the movie studios have finally realized that they can cut out the networks and the cable companies and sell or rent directly to you and me for less money but more profit, the way they'll do that is by embracing BitTorrent.

Why not? BitTorrent drops the studio cost of downloading movies from $0.50 or so to nothing at all. BitTorrent is more reliable and scalable than any movie studio web site will ever be. The ISPs just have to come around.

That may be easier than it first appears. ISPs hate BitTorrent right now because it costs them real money for real bandwidth. But they, too, are planning to offer video services and BitTorrent is really, really good for that. In the super-competitive broadband ISP environment of two to three years from now, I'm predicting that the ISPs will come to realize that BitTorrent is actually their friend.

What bugs ISPs right now is that they are paying a lot of money for the bandwidth being used by BitTorrent. But what is key to understand is that the bandwidth the ISPs feel sick about is INTERNET bandwidth, not the bandwidth of their own networks. If BitTorrent traffic is grabbing 30 percent of total Internet bandwidth, that means an ISP is paying 30 percent of its Internet bill for BitTorrent traffic. But remember that ISPs oversell their Internet bandwidth by 100 to 200 times, which means that the BitTorrent load might be 30 percent of the backbone connection, but less than one percent of the internal network bandwidth.
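
A back-of-the-envelope version of that argument, using the article's own numbers (30 percent of backbone traffic, roughly 100x oversubscription), looks like this:

    // Rough arithmetic only: BitTorrent's share of internal capacity vs. backbone capacity.
    public class BitTorrentShare {
        public static void main(String[] args) {
            double backboneShare = 0.30;  // BitTorrent's share of the ISP's backbone (Internet) traffic
            double oversell = 100.0;      // internal access capacity sold is roughly 100x the backbone

            double internalShare = backboneShare / oversell;
            System.out.printf("Share of internal network capacity: %.2f%%%n", internalShare * 100);
            // prints 0.30% -- "less than one percent of the internal network bandwidth"
        }
    }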

There is a solution here and that's to keep most BitTorrent traffic OFF the Internet. Comcast now has more than seven million broadband customers. What are the odds that you could make your BitTorrent download just as fast linking solely to other Comcast customers? For obscure content, sure, you reach out over the Net, but for American Idol, you can get it just as quickly without ever hitting a backbone.

My prediction, then, is that competition from WiMax and other new broadband providers will force ISPs to be more open, that movie studios and others will realize BitTorrent can be an ideal distribution medium, and that ISPs -- by localizing most BitTorrent traffic -- can make customers happy and save money, too.

We'll see.




Monday, April 25, 2005

Individual Tagging with RFID

Many people who know me and now read this blog have asked me why I haven't written any post about RFID. Being someone that always tries to show the technology wherever I go, here is my first RFID related post. HEB, please don't get mad at me for writing about Wal-Mart :)

In a recent shenanigan at a Wal-Mart store, techno-fraudsters printed out fake barcode labels and, in one instance, a $100 mattress rang up at checkout for the price of a bunch of bananas. But what if Wal-Mart had been in the habit of attaching RFID tags rather than barcodes to mattresses and bananas? Would the emerging wireless technology have saved the day?

The answer is "No"—at the moment, anyway. For RFID to work as an antidote to in-store theft and fraud, tagging is needed at the individual item level. And, to be charitable, item-level RFID tagging isn't likely to become much of a reality until the end of this decade—that is, unless we're talking about ultra high-end designer clothes (think Prada), or maybe costly pharmaceuticals (think controlled substances such as OxyContin, a drug marketed by Purdue Pharma as an analgesic).

In current trials at Wal-Mart, Target, U.K.-based Tesco and other large retailers, RFID is being deployed almost exclusively at the pallet and carton levels. Essentially, that's because item-level RFID continues to face two humongous hurdles: high pricing and mounting privacy concerns.

Although RFID advocates keep pointing to an idyllic future when RFID tags might cost three to four cents each, even the "passive" variety will still run you somewhere in the neighborhood of 20 to 30 cents today. And in contrast to some of the more costly "active RFID" technology, passive RFID tends to be much more subject to data tampering.

In the low-margin land of mass merchandise stores, it'd clearly make no sense at all to apply a 30-cent tag to a $1 bunch of bananas—or even to a $2.50 greeting card or a $10 tube of sun lotion.

On the other hand, it's already economically feasible to attach the same sort of tag to a four-figure handbag or a five-figure suit—something Prada's been proving quite well at its RFID-enabled showrooms in New York City.

Meanwhile, midmarket apparel stores such as Benetton have also been toying with the idea of item-level RFID. So, too, have retailers such as Tesco, which sells products across a broad spectrum of prices. But privacy advocates have been working hard to squelch efforts in this direction by mounting pickets and threatening boycotts.

For instance, during the spring of 2003, Benetton's previously announced plans to test item-level RFID came to a halt after a U.S.-based group called Caspian (Consumers Against Supermarket Privacy Invasion and Numbering) threatened a boycott.

Later that year, picketers stood outside a Tesco store in Cambridge, England, protesting the supermarket chain's decision to automatically snap photos of shoppers who picked up packets of Gillette Mach 3 razor blades. The packets had been marked with RFID labels as part of a trial with Gillette.

After the razor-blade test ended in June, Tesco proceeded with an item-level RFID trial of DVDs at its store in Sandhurst, England. Still, Caspian kept urging a worldwide boycott of Gillette products over RFID tagging concerns.

I predict that when item-level tagging does reach widespread deployment, it'll happen initially in pharmaceuticals. And in this context, the wireless technology will first come into play more as an anti-counterfeiting measure than as a theft deterrent.

On November 15, 2004, the FDA's Counterfeit Drug Task Force recommended a multilayered approach that includes RFID to help combat drug counterfeiting.

That same day, Purdue Pharma rolled out a pilot program for integrating passive RFID tags into the labels on 100-tablet bottles of OxyContin—a substance with "an abuse liability similar to morphine," according to its manufacturer. The first shipments of RFID-tagged bottles went out later that week to Wal-Mart and H.D. Smith, a big pharmaceuticals wholesaler.

Backing from a regulatory agency like the FDA might help to curb the privacy protests. It will definitely help to spur R&D in the overall area of item-level RFID tagging. And as some of you may recall from Economics 101, as supply of a product increases, prices will fall—or so the theory goes.

So some time after 2007—the timeframe now targeted by the FDA for RFID compliance—more retailers will probably start turning to item-level RFID to protect against theft of nonpharmaceutical items—even on $1 bunches of bananas, and more certainly, on $100 mattresses. But don't expect to see widespread item-level tagging any sooner than that.




Friday, April 22, 2005

Are we near the end of Moore's Law?

In his original observation, made in 1965, Gordon Moore argued that the number of transistors per integrated circuit increased as an exponential function, doubling about every year. The industry wasn't able to sustain quite that pace, and Moore made a downward revision in 1975, saying that the count doubled about every 2 years. Some claim that he revised it to 18 months, which, over the past 20 years, has proven even more reliable (Moore's original paper [pdf]). When the prediction was made, chips were cost-effective at about 50 transistors apiece. Soon after, Intel produced the 4004, the world's first single-chip microprocessor. The 4004 contained 2,300 transistors and was shrunk to an eighth of an inch wide by a sixth of an inch long. Today, the Itanium 2 chip contains half a billion transistors, or 2^29, to look at it in context. Wikipedia has a pretty nice graph of the relevant data.

There is now good reason to suggest that Moore's Law, which has been so reliable for so long, may be on the verge of losing its relevance. Many have suggested that Moore's Law can no longer be maintained because of economic factors or technological limitations. The intent of this post is to show why the opposite is true. I believe we are on the verge of outstripping Moore's doubling time.

Chip manufacturers are confident that they will be able to continue to maintain the pace of Moore's Law for the next decade. As of the fourth quarter of 2004, transistors in microprocessors were a little over 100 nanometers (nm) across (a nanometer is 10^-9 meters, or one one-billionth of a meter). If we assume that the transistor gets proportionally smaller in order to maintain chip size, then in 10 years, we would expect the transistor to be 10 nm across, and that the processor would contain 50 billion of them. If the industry leaders are correct, this should be well within our capabilities. But in 2003, several members of the Institute of Electrical and Electronics Engineers, Zhirnov, Cavin and Hutchby, submitted a paper that proposed that we may be about to hit a wall when it comes to scaling electronics.
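
A quick sanity check of that projection, assuming the 18-month doubling mentioned above and that linear feature size shrinks with the square root of the transistor count, does reproduce the 10 nm / 50 billion figures:

    // Back-of-the-envelope check of the ten-year projection (18-month doubling assumed).
    public class MooreProjection {
        public static void main(String[] args) {
            double transistorsNow = 0.5e9;  // ~half a billion today (Itanium 2 class)
            double featureNmNow = 100.0;    // ~100 nm across today
            double years = 10.0, doublingTime = 1.5;

            double growth = Math.pow(2, years / doublingTime);         // ~100x over ten years
            double transistorsThen = transistorsNow * growth;          // ~50 billion
            double featureNmThen = featureNmNow / Math.sqrt(growth);   // ~10 nm

            System.out.printf("In %.0f years: ~%.0f billion transistors at ~%.0f nm%n",
                    years, transistorsThen / 1e9, featureNmThen);
        }
    }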

Their paper, Limits to Binary Logic Switch Scaling--A Gedanken Model [pdf], proposed that switching in transistors is subject to constraints defined by Heisenberg's Uncertainty Principle. The paper used the term "energy barriers" to describe the potential between the gate and the carrier, but no matter how great the potential difference, eventually the tunneling of electrons and holes will become too great for the transistor to perform reliable operations. In short, the two states of the switch would become indistinguishable. This cannot be allowed in a binary system, but it would happen if the transistor shrinks to about 4 nm. Indeed, that would be the size of a transistor produced in 13 years, keeping strict adherence to Moore's Law.

They add that the heat from these transistors will be very difficult to moderate, because to do so would require somehow diverting the heat produced by this 5 nm device away from the processor. Alternatively, the entire processor could be cooled, which would produce more heat than it takes away.

In addition, there are rising costs for the producers of these chips. From the Wikipedia article:

It is interesting to note that as the cost of computer power continues to fall (from the perspective of a consumer), the cost for producers to achieve Moore's Law has followed the opposite trend: R&D, manufacturing, and test costs have increased steadily with each new generation of chips. As the cost of semiconductor equipment is expected to continue increasing, manufacturers must sell larger and larger quantities of chips to remain profitable. (The cost to "tapeout" a chip at 0.18u was roughly $300,000 USD. The cost to "tapeout" a chip at 90nm exceeds $750,000 USD, and the cost is expected to exceed $1.0M USD for 65nm.) In recent years, analysts have observed a decline in the number of "design starts" at advanced process nodes (0.13u and below.) While these observations were made in the period after the year 2000 economic downturn, the decline may be evidence that the long-term global market cannot economically sustain Moore's Law.

On what basis then could it be suggested that Moore's law could possibly be outstripped by technology? What evidence is there to suggest that we can possibly speed up the pace of electronics advancement better than we have in 40 years of exponential improvement? For this, we should look to some of the current advances in nanotechnology.

Exhibit 1: MIT's Technology Review. This article suggests a way that we may begin to solve the problem of heat dissipation. In the last year, nanoscience has managed to create something that has eluded electrical engineers for many decades. The (5,5) single-walled carbon nanotube (SWNT) is a superconductor at room temperature (a nanotube is defined by a chiral vector, (5,5) in this case; the dimension is a function of this vector, and knowing something about the chiral vector provides insight into how the nanotube looks when it is rolled up; this is an example of an armchair configuration). It is 0.55 nm in diameter and has already been used in an experimental transistor. Unlike any other transistor currently being produced, the SWNT can take on properties of both P- and N-type semiconductors simultaneously, depending on the gate voltage (more information on nanotube electronics).

Exhibit 2: Quantum Computing. Why be content looking for smaller ways to perform the same old processes? There are now a number of alternative processors starting to move into the realm of feasibility. At Almaden Research Center, the seven-qubit (quantum bit) quantum computer has already managed to run Shor's factoring algorithm. Take a standard computer with 'n' bits and a quantum computer with 'n' qubits. If the two computers can process a bit with the same speed, the quantum computer can run through 2^n states in the same amount of time it takes the conventional computer to process just one.

The DNA computer is also worth mentioning here. The distance between levels on a DNA chain is 3 nm, and a typical human chain is a couple of centimeters in length. That means each DNA chain is capable of storing 7 million DNA-bits, each of which is capable of 4 different "states": adenine, thymine, cytosine or guanine. That's 4^7,000,000 possible states, and during cell division, this gets processed in just over an hour!

Exhibit 3: The human brain. According to the linked article, the human brain should have the capacity to process 100 million MIPS (million instructions per second), or 100 trillion instructions per second. From SIGNAL magazine,

On an evolutionary scale, current processing speeds of 1,000 MIPS place robots at the small vertebrate level. "A guppy," [Hans] Moravec, [of Carnegie Mellon's mobile robot laboratory] says, adding that besides carrying out their specific functions, autonomous robots are only aware of their immediate surroundings. However, he predicts that increasing processing speeds will bring more capable systems within a decade. Once robots are commercially available in large numbers, many solutions for issues such as hazard recognition will arrive through incremental use and modification. "There is no substitute for field use for learning about problems and solving them," he says.
What this indicates is that computers are catching up fast. If Moore's law holds, then in 30 years, computers will be able to "think" faster than humans. Even before computers overtake the human brain, they may well become capable of improving on their own designs. The possibility of computers eventually rendering humans obsolete is touched on in Vinge's Singularity (original paper).

What these arguments still fail to take into account is the kind of human ingenuity that drives future innovation. There is incentive to revolutionize computing, because if alternative processors catch on, any company still trying to develop conventional microprocessors will quickly be left far behind. Any kind of unforeseen breakthrough will shorten this timetable, causing the exponential slope of Moore's Law to accelerate even faster.

So, here it comes. My prediction is that computer processors will improve by a factor of 4 in the next two years. Then, as they approach the limit of smallness, they will slow down and follow a more natural 1.5-year doubling time. Once DNA computers, quantum computers, or some other revolutionary type of microprocessor become an effective replacement for the conventional semiconducting microprocessor, Moore's Law will cease to be an effective predictor of the future of computing.

Foreseeing the Future

Ok, after almost five days away from writing due to some health problems, here is a rather interesting article in this month's Scientific American, by a group from Rand, on decision making given an uncertain future.

The article talks about using simulation to develop "robust" solutions. These are solutions that "perform well when compared with the alternatives across a wide range of plausible futures". The authors use this method to examine long-term environmental regulations, although I sure would like to develop one myself to examine the results of decisions in my own life! Wouldn't you?




Monday, April 18, 2005

Get a Chair and Start Thinking...

I have a designated "thinking chair" in my office. Or more accurately, since I only have one chair, I have a designated "chair position" in my office.

I don't sit in it when someone drops by to talk. I don't take power naps in it. I use it only for thinking. This chair doesn't think for me, but it does speak to me every now and then. If I've gone a few days without sitting in it, its presence subtly reminds me that I'm not devoting enough time to the all-important task of thinking.

When we fail to make thinking a priority, we develop what author Gordon MacDonald calls "mental flabbiness." This may not sound like a life-threatening condition, but in some ways, it can be quite dangerous. Here's how MacDonald explains it:
"In our pressurized society, people who are out of shape mentally usually fall victim to ideas and systems that are destructive to the human spirit and to the human relationship," he writes. "They are victimized because they have not taught themselves how to think, nor have they set themselves to the lifelong pursuit of growth of the mind. Not having the faculty of a strong mind, they grow dependent upon the thoughts and opinions of others. Rather than deal with ideas and issues, they reduce themselves to lives full of rules, regulations, and programs."

You can't be an effective leader with a mindset like that—it's just not possible.

Fortunately, there is an antidote to mental flabbiness: making time to think. I realize this can be a daunting assignment for people whose schedules are already bursting at the seams. And yet, when we don't make thinking a priority, we're actually sabotaging our own creativity and success.

Think about it. One of the most valuable commodities in a person's life is a great idea. A great idea has transforming power. It can take you places you may never have dreamed of going. But great ideas don't come out of nowhere. They begin as thoughts. So it stands to reason that the more time we spend thinking, the more great ideas we'll have.

The good news is that it doesn't take hours of thinking each day to generate ideas and stay in good mental shape. You can accomplish a great deal in a few moments of concentrated, intentional thought.

Let me give you two examples of how this works in my life. Every morning, I devote three minutes to what I call "big-picture thinking." I look at my schedule for the day and ask myself one simple question: What's the main event? Of all the things I'm going to do, of all the people I'm going to see, of all the experiences that I'm going to encounter, what's the main event?

You can't prioritize your day if you don't see everything in your day. That's why I practice big-picture thinking in the morning. I have to pick out my main event early, because whatever it is, that's where I had better be at my best. I'm human, and I don't always hit the ball out of the park. Sometimes I don't hit the ball at all. But at the main event, I had better hit a homerun. Big-picture thinking helps me achieve that goal.

At the end of the day, I spend another five to 10 minutes doing what I refer to as "reflective thinking." I go to my thinking chair and spend time reviewing my whole day. I ask myself questions such as, "Who did I see today? How did I add value to those people? What lessons did I learn?" Reflective thinking doesn't take long, but it's an incredibly valuable exercise because it turns experience into insight.

Can you imagine what would happen in your life if you practiced big-picture and reflective thinking? You would stop wasting time on things that don't really matter, which would give you more energy for the really important activities. You would be more organized and efficient. You would experience less stress. Most importantly, you would also take more away from each day that would enable you to lead better the next day.

The best way to start this process is to designate a specific place to think. It doesn't matter if your "thinking chair" is in your den at home or your office at work. It just has to be a spot where you can do nothing but think for a few moments twice a day.

The bottom line is this: If you find a place to think your thoughts, you'll have more thoughts. If you find a place to shape your thoughts, you will have better thoughts. And if you find a place to stretch your thoughts, you will have bigger thoughts. All this, from just three minutes in the morning and five to ten minutes at night. As you can see, the results far outweigh the time investment.




Sunday, April 17, 2005

Prepare Your Code for Globalization

Many of us in software feel pretty smug right now: We "made it." The crash came, the jobs left, but we survived. We persevered. We studied, we worked, we applied ourselves, and we made ourselves worth employing in a down market.

Well, it's going to get worse. I will explain. Most of us realize that the economy moves in cycles: Things change, new jobs are created and old jobs become obsolete. The 1930s created many jobs in the automotive industry, but who used those machines? In the Midwest, tractors replaced the horse and plow and enabled a single person to do the work of dozens. Of course, this made it uneconomical to be a farmer.

By the 1970s, globalization came to the auto industry. Instead of just the Big Three, competition came from Japan, Korea, and Germany. Many Americans felt pride, even loyalty, to their country; yet when the import cars came with the same quality for fifty cents on the dollar, American money went overseas, and the jobs went with it.

Right now, globalization is hitting the office furniture industry. The American companies that are doing well have stopped doing the manufacturing themselves: They are importing components and doing assembly work, or have moved plants to Mexico, India, or other areas with cheaper labor and materials.

Globalization is coming to technology.
This comes as no surprise to many people: The largest company in India is Tata Consulting, which has been offering trained software consultants at rock-bottom prices for years. In 1992, author Ed Yourdon wrote Decline and Fall of the American Programmer. A few years and some major changes in the economy later, Yourdon wrote Rise & Resurrection of the American Programmer. You see, communicating across continents and time zones is hard. Then came the internet. Yourdon wasn't wrong...just too early.

Right now, today, it's possible to write up a specification for a piece of software and send it off-shore for far less than it would cost to develop in America. In this age of tightened belts all around, why would anyone buy American software when they can buy it from someone else for one tenth the price?

Of course, this article is titled "Prepare your code for globalization". And so, I intend to provide strategies that can keep you employed and earning more and more even if your competition charges less and less.

Keep in mind the reality: businesses want to reduce cost and risk while increasing revenue. To succeed as a software developer, don't try to sell working software for less money than others; instead, reduce cost, reduce risk or increase revenue for those companies. I will discuss a few ways to do these things, and do them well.

1) Provide Guarantees.
So the other person provides a lower hourly cost. So what? Does that mean that the total cost is going to be less? Most people that deal with software contractors know that an estimate is rarely worth the paper it's printed on. That's why fixed-price and fixed-date contracts are so appealing to customers: They move the risk from the shoulders of the customer to the selling organization. As long as the buying organization is certain to make money, hourly rates won't matter. (How do you compare "$6/hour, and we think it'll take about six months" to "$10,000, and it will be done in three months"? How about to "I'll take 30% of gross revenues. If you don't make a dime, I don't make a dime...and this will encourage me to make it good enough to re-sell"?)

2) Analyze the business and provide a better solution
Joel Spolsky once wrote that "Customers Don't Know What They Want. Stop Expecting Customers to Know What They Want." In other words, the attitude of "Just give me the requirements" fails because it has the customer solving the problem; the software developer becomes just a glorified technical writer that knows how to write in the language of a machine.

3) Dramatically decrease the defect rate
Are people willing to pay for quality in software? Sadly, generally, the answer is no. Quality in software is hard to measure; unlike automobiles, there are usually no crash or endurance tests to compare against, especially for custom software. Yet we all know that plumbers, electricians, and roofers with a reputation for quality have more work orders than they know what to do with. Producing software with fewer defects, software that is usable and does what the customer expects, will net you a major competitive advantage for years to come.

4) Create well-documented, maintainable code
Despite all the jokes about job security, companies want well-documented, easy-to-understand and easy-to-change systems. This allows them to reduce risk, and, as we've previously discussed, reducing risk has tangible, measurable value to a company. The great thing about increasing the value of what you sell is that you can now charge more for it.

5) Provide better feedback
If you prioritize every feature, you can work on the most important features first. A series of small releases gives the customer the most important features first, and the opportunity to provide feedback. This is not a new idea; it is one of the core ideas of the Extreme Programming model, and it's an excellent way to give the customer more while costing you less. (Think about this: Most large projects run late and over budget. Many small projects do not. Instead of "biting off more than we can chew" next time, why not refuse to run a large project and instead run a series of small projects?)

6) Show the customer how you will make them money or allow them to cut costs.
This one is a no-brainer. It's easy to charge more for your services and still win the bid if you are selling something fundamentally different: This is why McDonald's franchises sell for more than Johnny Pizza Time franchises. Imagine the two sales pitches:

Johnny's: "Hey, for $10,000 and 3% of your sales revenue, I'll let you use my name, my sign, my recipies, my suppliers for food, cups, plates - the works!"

McDonald's: "For $1,000,000 and 8% of your sales revenue, we'll give you everything Jerry does - plus throw in a lease on a furnished building in residential area X. We'll promise no McDonald's competition (except the ones you own) in a 50-mile radius of your store. We'll provide management training for your people. In fact, here's a breakdown of our 200 stores in areas with a similar population to X, and their sales compared to expenses for the first five years of business. As you can see, since 1995, only 10 of those stores failed to be profitable within three years, and they were all profitable within five years."

Conclusions
From the last example, you can see that McDonald's and Johnny's are selling two fundamentally different things. They both seem to "solve" the same problem: "I want to own a fast-food business." McDonald's chooses not to compete on price; instead, they compete on delivered results.

Most banks compete on delivered results for investment. While they may occasionally advertise that they have low or no minimum balance, it is far more common to hear about a low rate for a loan or a high rate for an investment. If we are to survive the coming bust, we must Promise and Deliver Results. These results must substantially differentiate us from other, cheaper competition.

If you try to build a house and base every decision on cost, you will probably get what you deserve. Most people know this, and factor other things into the decision. As the software industry matures, we must learn to provide and market those "other things." In order to survive, we must stop being glorified technical writers and become businessmen...and the need for good businessmen is not decreasing, but instead it is constantly increasing.

How Career Imprinting Shapes Leaders

In my early years as a developer, I was privileged to work on a project managed by Frank Stepic, now head of the engineering division at GE Aircraft Engines. He was a walking example of much of what I now think of as enlightened management. One chilly day, I dragged myself out of a sickbed to pull together our shaky system for a user demo. Frank came in and found me propped up at the console. He disappeared and came back a few minutes later with a container of soup. After he'd poured it into me and buoyed up my spirits, I asked him how he found time for such things with all the management work he had to do. He gave me his patented grin and said, "Rodolfo, this is management."

Frank knew what all good instinctive managers know: "The manager's function is not to make people work, but to make it possible for people to work." - Peopleware, DeMarco and Lister, p. 34

Having worked with him imprinted his leadership style in me. There is a good book on this subject that I recommend; reading it, you can discover how much the people in your past have influenced the way you behave, usually more than you think. Here is an interview with its author, and here is a link to Amazon where you can get a copy of it.




Wednesday, April 13, 2005

How to start on blogs

Many people have e-mailed me to ask, “How can I read blogs more easily?” Perhaps more importantly, “Why should I read blogs at all?” I started to write on this topic myself and stumbled on an excellent article by Stephen O’Grady over at tecosystems. He says,

The purpose of this post is to give the many people who still haven’t gotten into blogs—i.e. not my regular readers—a simple, step-by-step example of how to dip a toe in the blogging waters.

The article is entitled How to Get into Blogs 101. It is definitely worth a read if you are interested. One of the most helpful parts of his post is how to set up a blog reader.




Tuesday, April 12, 2005

Are open source developers rock stars?

When I was a kid, all I wanted to be was a rock star. I wanted to play guitar, get up on stage, and have everyone scream while I cranked out some hard rockin' tune. I wanted to see lighters held up in the crowd as I finished my last set - dripping with sweat, completely tired, and no energy left. Leave it all on the stage - that's what I wanted. My friends all felt the same - we talked about it all the time.

Well, that never happened. Instead I went to college and spent more time in the computer center than I did at parties (well, not really...). The only thing I cranked out was code. Later, I got a job writing software and I've been working with computers ever since.

While I still listen to a lot of music and have Gigs of tunes on my iPod, my dreams of being a rock star have faded. I still think about them once in a while, but more than that, I now think about open source. So do a bunch of my friends.

I met a guy at a bookstore a while ago. (I hang out in those kinds of places now instead of the record shop.) He writes financial applications for a mutual fund company. All he wanted to talk about was JBoss. He'd spent some time working on the JMS implementation but had gotten too busy to continue. He wanted to get back involved as soon as he could. All those people who were building the latest JBoss - he wanted to be one of them.

In his eyes I saw the same stars I used to have. I used to think that way about Axl Rose and Bon Jovi. I wanted to be one of them. When I was younger, I ran out to buy the latest Guns n' Roses album - now I run out to get the latest build of Gentoo or Hula.

Open source developers are the rock stars of the software world. The parallels actually go pretty far. You can say they don't get the money and fame, but I think you're wrong. The average open source developer probably makes more at his or her job than most local musicians make. I've met open source developers who have founded software companies and are doing pretty well financially. As far as fame goes, they may not do quite as well as real rock stars but some do pretty well; Linus Torvalds is fairly famous, but I guess not like Kurt Cobain.

They're also usually the most talented developers. Rock stars get where they are in the music world by being great musicians; open source rock stars get where they are by writing great code.

Naming their projects is a lot like naming their bands. When you hear people talking about Subversion, Ethereal, or Excalibur (all open source projects), it's hard to tell if they mean software projects or rock bands.

A good friend of mine called me once and went on for 30 minutes about how he was submitting a patch to the Jakarta Struts project (a JSP framework from the Apache Software Foundation). His patch would allow you to define validations for one input field based on the value of some other field (e.g., if you fill in a last name, make sure you fill out a first name...). He was totally excited about it and went into all the details of how he built it.

After he was done telling me about it, he was almost out of breath. I reached in my pocket, pulled out a lighter, and stood there holding it lit in the air.

Leave it all on the stage.

How to Blog Safely

Blogs are like personal telephone calls crossed with newspapers. They're the perfect tool for sharing your favorite chocolate mousse recipe with friends--or for upholding the basic tenets of democracy by letting the public know that a corrupt government official has been paying off your boss.

If you blog, there are no guarantees you'll attract a readership of thousands. But at least a few readers will find your blog, and they may be the people you'd least want or expect. These include potential or current employers, coworkers, and professional colleagues; your neighbors; your spouse or partner; your family; and anyone else curious enough to type your name, email address or screen name into Google or Feedster and click a few links.

The point is that anyone can eventually find your blog if your real identity is tied to it in some way. And there may be consequences. Family members may be shocked or upset when they read your uncensored thoughts. A potential boss may think twice about hiring you. But these concerns shouldn't stop you from writing. Instead, they should inspire you to keep your blog private, or accessible only to certain trusted people.

Here we offer a few simple precautions to help you maintain control of your personal privacy so that you can express yourself without facing unjust retaliation. If followed correctly, these protections can save you from embarrassment or just plain weirdness in front of your friends and coworkers.

Blog Anonymously

The best way to blog and still preserve some privacy is to do it anonymously. But being anonymous isn't as easy as you might think.

Let's say you want to start a blog about your terrible work environment but you don't want to risk your boss or colleagues discovering that you're writing about them. You'll want to consider how to anonymize every possible detail about your situation. And you may also want to use one of several technologies that make it hard for anyone to trace the blog back to you.

1. Use a Pseudonym and Don't Give Away Any Identifying Details

When you write about your workplace, be sure not to give away telling details. These include things like where you're located, how many employees there are, and the specific sort of business you do. Even general details can give away a lot. If, for example, you write, "I work at an unnamed weekly newspaper in Seattle," it's clear that you work in one of two places. So be smart. Instead, you might say that you work at a media outlet in a mid-sized city. Obviously, don't use real names or post pictures of yourself. And don't use pseudonyms that sound like the real names they're based on--so, for instance, don't anonymize the name "Annalee" by using the name "Leanne." And remember that almost any kind of personal information can give your identity away--you may be the only one at your workplace with a particular birthday, or with an orange tabby.

Also, if you are concerned about your colleagues finding out about your blog, do not blog while you are at work. Period. You could get in trouble for using company resources like an Internet connection to maintain your blog, and it will be very hard for you to argue that the blog is a work-related activity. It will also be much more difficult for you to hide your blogging from officemates and IT operators who observe traffic over the office network.

2. Use Anonymizing Technologies

There are a number of technical solutions for the blogger who wishes to remain anonymous.

Invisiblog.com is a service that offers anonymous blog hosting for free. You may create a blog there with no real names attached. Even the people who run the service will not have access to your name.

If you are worried that your blog-hosting service may be logging your unique IP address and thus tracking what computer you're blogging from, you can use the anonymous network Tor to edit your blog. Tor routes your Internet traffic through what's called an "overlay network" that hides your IP address. More importantly, Tor makes it difficult for snoops on the Internet to follow the path your data takes and trace it back to you.
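
For the developers in the audience, a rough sketch may make this concrete: the Java snippet below is a hypothetical illustration (not something taken from Tor's documentation) that fetches a page through a local Tor client's SOCKS proxy, which listens on port 9050 by default; the URL is just a placeholder.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.InetSocketAddress;
    import java.net.Proxy;
    import java.net.URL;
    import java.net.URLConnection;

    public class TorFetch {
        public static void main(String[] args) throws Exception {
            // Assumes a Tor client is running locally on its default SOCKS port, 9050.
            Proxy tor = new Proxy(Proxy.Type.SOCKS,
                    new InetSocketAddress("127.0.0.1", 9050));

            // Placeholder address; the server sees a Tor exit node's IP, not yours.
            URL url = new URL("http://www.example.com/");
            URLConnection conn = url.openConnection(tor);

            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()));
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line);
            }
            in.close();
        }
    }

A caveat: a bare-bones approach like this may still resolve host names locally, so your DNS lookups are not necessarily anonymized. In practice most bloggers simply point their web browser at the local Tor proxy (often paired with a filtering proxy such as Privoxy) rather than writing code.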

For people who want something very user-friendly, Anonymizer.com offers a product called "Anonymous Surfing," which routes your Internet traffic through an anonymizing server and can hide your IP address from the services hosting your blog.

3. Limit Your Audience

Many blogging services, including LiveJournal, allow you to designate individual posts or your entire blog as available only to those who have the password, or to people whom you've designated as friends. If your blog's main goal is to communicate to friends and family, and you want to avoid any collateral damage to your privacy, consider using such a feature. If you host your own blog, you can also set it up to be password-protected, or to be visible only to people looking at it from certain computers.
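
If you do host your own blog under Apache, for example, a small .htaccess file can cover both cases. The sketch below is only an illustration; the realm name, password-file path, and network range are placeholders you would replace with your own.

    # Ask for a username and password before serving this directory
    AuthType Basic
    AuthName "Friends and family only"
    AuthUserFile /home/yourlogin/.htpasswd
    Require valid-user

    # Or, to make the blog visible only from certain computers, drop the
    # lines above and allow just your own network instead:
    # Order deny,allow
    # Deny from all
    # Allow from 192.168.1.

The password file is created with Apache's htpasswd utility (for example, htpasswd -c /home/yourlogin/.htpasswd somefriend), and your hosting provider must allow .htaccess overrides for the directives to take effect.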

4. Don't Be Googleable

If you want to keep most major search engines, such as Google, from including your blog in their results, you can create a special file that tells these search services to ignore your domain. The file is called robots.txt, or a Robots Text File. You can also use it to exclude search engines from certain parts of your blog. If you don't know how to create the file yourself, you can use the free "Robots Text File Generator" tool at Web Tool Central.
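
The file itself is tiny. A robots.txt at the root of your domain that asks all compliant crawlers to stay out of the whole site looks like the sketch below (the /private/ path is just an example of excluding one section instead); remember that it is a polite request honored by well-behaved search engines, not an access control.

    # Keep compliant search-engine crawlers out of the entire site
    User-agent: *
    Disallow: /

    # ...or out of just one part of it:
    # User-agent: *
    # Disallow: /private/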

Blog without Fear

Blogs are getting a lot of attention these days. You can no longer safely assume that people in your offline life won't find out about your blog, if you ever could. New RSS tools and services make it easier than ever to search and aggregate blog entries. As long as you blog anonymously and in a work-safe way, what you say online is far less likely to come back to hurt you.

Resources

C|Net's guide to workplace blogging: http://news.com.com/FAQ+Blogging+on+the+job/2100-1030_3-5597010.html?tag=nefd.ac

How Tor works: http://tor.eff.org/overview.html

Anonymizer's Anonymous Surfing: http://www.anonymizer.com/anonymizer2005/1.5/

A list of fired bloggers: http://morphemetales.blogspot.com/2004/12/statistics-on-fired-bloggers.html

The Bloggers' Rights Blog: http://rights.journalspace.com/




Monday, April 11, 2005

A history of free and open source

From GrokLaw:

Historian Peter H. Salus is writing "A History of Free and Open Source". We thought that, with ADTI back with its Grim Fairy Tales, it would be useful to tell the FOSS story truthfully and in a scholarly way, so readers now and historians in the future can rely on the facts. Here's the first installment, the Introduction, and I know you will enjoy it. Look for the next episode on the 6th or 7th of April and every Wednesday or Thursday after that.

Read more...

iPod's social impact

If you're still hungry for more riveting news on the social impact of the iPod, see the NYTimes article about this (free registration, etc., etc.); it turns out iTunes playlists are more about bling and less about revealing your true self! Who knew? This week the Georgia Institute of Technology and the Palo Alto Research Center released an anthropological study revealing that when co-workers share playlists on office networks, they're more concerned with the image a cubicle-mate might draw from seeing, say, N'Sync or MC Hammer next to their name than with the fuzzy feeling that cubicle-mate might get from a tearful Celine Dion ballad they've been given access to.

Since I became the owner of a U2 Special Edition iPod a couple of months ago, I've been grappling with the sociological effects it's having on me. I live in Monterrey, Mexico, and one of my favorite aspects of living in this city is staying constantly engaged with my surroundings -- a big part of that is everything pulsating around you. When you block out sound, sure, you have the privilege of your own personal soundtrack, but you drown out all the city's noise and character that make it a vibrant place to live.

How do you think this has impacted random exchanges, homeless donations, dating? Have you experienced a similar ambivalence? Or noticed other sociological effects of your white-horned friend? Have you come across any related studies on this?




Saturday, April 09, 2005

Open Source Software Search Engines

Lately I have been thinking about all the open source code that is available around the world. I was asking myself how it would be possible to search through all this code and use it for study and analysis.

Of course, this is not a new idea, and I was surprised by how many tools try to address this topic in one way or another...

Source Code:

  • http://www.koders.com/
    From the site: Koders.com is the leading search engine for open source code. Our source code optimized search engine provides developers with an easy-to-use interface to search for source code examples and discover new open source projects which can be leveraged in their applications.

  • http://www.jexamples.com/
    From the site: We analyze the source code of production Java open source projects such as Ant, Tomcat and Batik and load that analysis into a database designed for easy searching. You enter the name of a Java Class.method you want to see example invocations of and click Search.

  • http://archive.devx.com/sourcebank/
    From the site: DevX's Sourcebank is a directory of links to source code and script posted around the Web. Use the Search option to find terms within the source code. To cast the widest net, use the search with All Types selected. Or, you can browse through a subset of the code by categories (below). First, select a filter, such as C or Java, by clicking on one of the square buttons and then choose one of the categories (such as Mathematics) from within that filter.

  • http://gonzui.sourceforge.net/
    From the site: gonzui is a source code search engine for accelerating open source software development. In the open source software development, programmers frequently refer to source codes written by others. Our goal is to help programmers develop programs effectively by creating a source code search engine that covers vast quantities of open source codes available on the Internet.

Components, Libraries:

  • http://www.codezoo.net/
    From the site: CodeZoo exists to help you find high-quality, freely available, reusable components, getting you past the repetitive parts of coding, and onto the rest and the best of your projects. It’s a fast-forward button for your compiler.

  • http://www.jarhoo.com/
    From the site: Searches for jar files or fully qualified java class names usually performed under 2 seconds. Package or non-qualified class name searches may take around 10 seconds

Javadoc:

  • http://www.jdocs.com/
    From the site: JDocs is a comprehensive online resource for Java API documentation. All the javadocs for a variety of popular packages are loaded into our db-driven system, and users can contribute their own notes to virtually any class, field, method. In short, JDocs provides a knowledge base defined around the major Java api's themselves, so you can find the information you're looking for right where it should be... in the documentation!

  • http://javadocs.org/
    From the site: You can search from the url, eg: javadocs.org/string

  • http://ashkelon.sourceforge.net/
    From the site: ashkelon is an open source project. It is a Java API documentation tool designed for Java developers. Its goals are the same as the goals of the well-known javadoc tool that comes with J2SE, whose user interface most java developers are quite familiar with.

The Language of Freedom

Open source licenses promise to everyone what many in the community refer to as software freedom. The terminology of freedom is emotionally satisfying, but it has proven to be very confusing.

Freedom is an important subject in law school. Constitutional law courses address such topics as the free speech clause of the First Amendment to the U.S. Constitution. But freedom seldom comes up as a topic in classes devoted to business issues such as contract or tort law, or software licensing. Law school courses on intellectual property deal with copyright and patent, but they don’t teach about freedom, referring instead to the rights of the owners of those legal monopolies.

As a result, there is no easy conceptual basis for integrating the language of freedom into the legal language of software licenses. For example, where the word free is currently used in software licensing contexts, it usually means zero, as in free of charge or free of defects.

Neither of these meanings is intended by open source licenses.

Not that software freedom isn’t definable. The Free Software Foundation lists four essential kinds of software freedom:

1. The freedom to run the software for any purpose
2. The freedom to study how the software works and to adapt it to your needs
3. The freedom to redistribute copies of the software
4. The freedom to improve the software and distribute your improvements to the public

That list, it turns out, can be satisfied by many different software licenses. Both the GPL and the BSD licenses, the earliest open source examples from the late 1980s, ensure those four kinds of software freedom, although they do it in vastly different ways.

Proprietary software vendors love the software freedom provided by the BSD license, but some of them hate and fear the software freedom guaranteed by the GPL. So once again, the concept of freedom by itself is only marginally helpful to understanding open source licensing.

Open Source Use Survey

The EU-funded FLOSSPOLS project is carrying out a follow-up survey of Open Source / Free Software developers worldwide.

Rishab Aiyer Ghosh, the programme leader and author of the findings paper from the previous report, writes about the survey:

The EU-funded FLOSSPOLS project is carrying out a survey of developers worldwide. This is a follow-up to the original FLOSS (Free/Libre/Open Source Software) survey in 2002, which was one of the first and most comprehensive surveys of developers - who they are, how they work and why they do it. The new survey aims to provide an update, include new developers, and answer some of the questions that were raised by the first one. In particular, how do developer communities help in learning skills and generating employment, and why is the level of participation by women so low?

According to Danese Cooper, the self-proclaimed Open Source Diva, who recently left Sun to join Intel, Ghosh will also be joining her and other notables on the new OSI board.

On a side note, this jumble of acronyms (FLOSS, FOSS, OSS/FS) is getting out of hand; if only people could be persuaded to settle on a single, easy-to-pronounce, collective name for Open Source and Free Software... Maybe we should just call it Commons Software, and hope that it doesn't give rise to more "Open Source is communist" trolling.




Friday, April 08, 2005

On the way to big apps

The open-source Linux operating system now runs on about a quarter of all computer servers. That's remarkable penetration in a very short time. Does it mean we'll start seeing Linux desktop PCs catching on, as many people in the tech world keep hoping? Maybe that will happen in China and Brazil, or in low-cost environments like call centers.

But the question of how much market share Linux on desktops will gain over the next few years isn't the one to be asking right now and misses the more dramatic shift in the software business. Conquering the desktop doesn't really matter anymore. Most of the really interesting software these days runs on central servers. We access it via our PCs through the Internet or a corporate network. And on those servers is a wide range of open-source software applications that are making impressive gains. There is Linux, of course, which is already a slam dunk. But on top of Linux, open-source middleware like the MySQL database and the JBoss Web application server are beginning to get some traction. MySQL is the database that powers big chunks of Google, Travelocity, and Yahoo. JBoss has come out of nowhere to match BEA and IBM in the Web application server market. According to one survey, JBoss actually surpassed both IBM and BEA late last year in the sheer number of Web apps deployed. All the pieces are now in place for open-source applications, especially enterprise applications, to emerge.

There already exist, for instance, open-source versions of customer-relationship management software (SugarCRM), enterprise-resource planning software (Compiere), and clinical-information management software for hospitals (Medsphere's Vista), among others. These budding open-source enterprise applications -- and there are lots more in the works -- are aimed right at the heart of traditional enterprise software vendors such as Oracle and SAP. "The enterprise software model is broken," argues Medsphere CEO Larry Augustin, who in a former life was the founder of VA Linux. Enterprise software is heavy, ugly, and expensive. It requires a long sales cycle, and even longer to install, and it's out of reach for most small and medium-size businesses. Rather than being used to improve products, an astounding three-quarters of new license revenue in the enterprise software industry is instead plowed right back into sales and marketing. That means, as Augustin likes to point out, that the business model of a traditional enterprise software company is effectively to charge customers a ton of money to convince those same customers that they need the software in the first place.

The advent of open-source enterprise software promises to turn that business model on its head. Since the software is free, there are no huge, up-front licensing fees required before a customer can try it. (Open-source software companies like Medsphere and SugarCRM charge instead for ongoing maintenance and support.) Consequently, open-source software companies don't need to spend as much on sales and marketing. In fact, they tend to spend hardly anything at all on it. Customers can try the software before they buy it, and the hundreds or thousands of outside developers who contribute code to the software are also great advocates for the various products. "That community is our sales and marketing," Augustin says. There are other benefits to going the open-source route. Any business can customize the software to its own needs; the software is inherently more secure, since holes can be plugged by any programmer as soon as they're noticed; and it is not dependent on any one company being around in the future to keep supporting it. The biggest potential advantage, though, is that enterprise-class software can now be used by small and medium-size businesses that previously could not afford it. This underserved market is the last great growth opportunity for the enterprise software industry. "With zero acquisition costs, someone with only a few hundred employees can now take advantage of something that was only available to large enterprises," says Peter Kronowitt, a strategic planner and resident Linux expert at Intel. Adds Augustin, "Some people say open-source is a destroyer of markets, but lower cost means broader market availability." The trick for open-source software companies will be not only to win over those smaller customers but also to convert them into paying customers by upselling them maintenance and support contracts.

The open-source upstarts will also need to compete with other low-cost alternatives to enterprise software that do not rely on open-source. On-demand software utilities like Salesforce.com are playing on the same weaknesses of the old-school enterprise software players by offering competing software for an affordable subscription. The only thing for certain is that as open-source moves up the software stack -- from the operating system to middleware to applications -- proprietary software vendors of all stripes will need to lower their prices or offer something new and wonderful and not yet available for free. Can you imagine?




Tuesday, April 05, 2005

Open source books on sale

Bookpool.com has a nice outlet sale on Open Source book titles. Check it out!

How much money in Open Source for 2004?

In the US alone, over $200 million if you count the 'big ones' (MySQL, Red Hat, JBoss, etc.). That's a lot of money, and only the tip of the iceberg.

Money is flowing into open source startups at a furious pace, which prompts the question:

Why aren't you starting one? (I am...)

I will dedicate some time to researching how much money is being invested in Mexico in this area, but I foresee it will be a daunting task, since information in Mexico does not flow as easily as it does in the US.
Does anyone have a good source for this?

On technology and culture

This is my first post. Although I am located in Mexico and am a Mexican citizen, it is in my interest to keep this blog readable for the widest possible audience, so I will post most of my comments in English.

Please share any comments that you may have.