Why your company should have a very permissive open source policy

Having a permissive open source policy is important if a company wants to recruit truly stellar programmers. Or put another way: great programmers will be less inclined to work for you if you have a restrictive open source policy, because involvement in open source projects is one of the best ways for a programmer to increase their market value.

Traditional methods for measuring programming ability are ineffective

The job market for programmers, especially top programmers, is notoriously inefficient. This inefficiency comes from employers lacking good methods for evaluating programmers. The standard techniques used to evaluate programmers -- resumes, on-the-spot coding questions, take-home projects -- are at best crude approximations of a programmer's ability, and none of them will identify the truly visionary people. Sure, there are other signals, like involvement in successful companies or impressive past titles, but those are still indirect indicators of programming ability.

If you're a programmer, this difficulty in measuring your skill means it's really difficult to make a potential employer's perceived value of you match your actual value. Top programmers aren't differentiated from the next tier of programmers and get badly mispriced in the market. They need better mechanisms to communicate their value so that the market can price them more fairly.



News Feed in 38 lines of code using Cascalog

In this Cascalog tutorial, we're going to build part of the back-end for a simplified version of a Facebook-like news feed, walking through an end-to-end example of running Cascalog on a production cluster. If you're new to Cascalog, you should first look at the introductory tutorials here and here.

The code and sample data for the example presented in this tutorial can be found on GitHub.
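
For a taste of what that looks like, here's a minimal sketch of the kind of query involved. The "follows" and "action" datasets below are hypothetical stand-ins of my own, not the tutorial's actual data:

(use 'cascalog.api)

;; Hypothetical in-memory datasets: who follows whom, and what each user did.
(def follows [["alice" "bob"] ["alice" "carol"]])
(def action  [["bob" "commented" 1000] ["carol" "posted" 1001]])

;; A user's feed is every action by someone they follow. The join on ?who
;; happens implicitly, just from reusing the variable in both predicates.
(?<- (stdout)
     [?user ?who ?what ?time]
     (follows ?user ?who)
     (action ?who ?what ?time))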



Cascalog Presentation at Bay Area Clojure User Group

Here are the slides from my presentation about Cascalog at the Bay Area Clojure User Group last night:


New Cascalog features: outer joins, combiners, sorting, and more

In the first tutorial for Cascalog, I showed off many of Cascalog's powerful features: joins, aggregates, subqueries, custom operations, and more. Since Cascalog's release a couple of weeks ago, I've added a number of new features that seriously increase the expressiveness and performance of the language without compromising its simplicity or flexibility. One of them, outer joins, is sketched below.
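
Outer joins are expressed with "ungrounding" variables, which are prefixed with !! instead of ?. A minimal sketch with toy in-memory data (my own example, in the style of the introductory tutorial):

(use 'cascalog.api)

(def person [["alice"] ["bob"] ["charlie"]])
(def age    [["alice" 28] ["bob" 33]])

;; !!age is an ungrounding variable: people with no matching age tuple
;; still appear in the output, with a null in that position. Using it
;; turns the implicit join into an outer join.
(?<- (stdout)
     [?person !!age]
     (person ?person)
     (age ?person !!age))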



Introducing Cascalog: a Clojure-based query language for Hadoop

I'm very excited to be releasing Cascalog as open-source today. Cascalog is a Clojure-based query language for Hadoop inspired by Datalog.


  • Simple - Functions, filters, and aggregators all use the same syntax. Joins are implicit and natural.
  • Expressive - Logical composition is very powerful, and you can run arbitrary Clojure code in your query with little effort.
  • Interactive - Run queries from the Clojure REPL.
  • Scalable - Cascalog queries run as a series of MapReduce jobs.
  • Query anything - Query HDFS data, database data, and/or local data by making use of Cascading's "Tap" abstraction.
  • Careful handling of null values - Null values can make life difficult. Cascalog has a feature called "non-nullable variables" that makes dealing with nulls painless.
  • First class interoperability with Cascading - Operations defined for Cascalog can be used in a Cascading flow and vice versa.
  • First class interoperability with Clojure - You can use regular Clojure functions as operations or filters, and since Cascalog is a Clojure DSL, you can use it from other Clojure code.
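
To give a flavor of the language, here's a complete query you can run straight from the REPL -- find everyone younger than 30. The "age" dataset is a toy stand-in of my own:

(use 'cascalog.api)

(def age [["alice" 28] ["bob" 33] ["chris" 25]])

;; Emit ?person for every age tuple whose ?age passes the < filter.
(?<- (stdout)
     [?person]
     (age ?person ?age)
     (< ?age 30))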



Fun with equality in Clojure

I ran into some very non-intuitive behavior from Clojure recently. See if you can guess what "foo" is in the following examples:

Example 1:

user=> foo
1
user=> (= foo 1)
true
user=> (= [foo 2] [1 2])
true
user=> (= {foo 2} {1 2})
false

Example 2:

user=> foo
false
user=> (= foo false)
true
user=> (when foo (println "shouldn't print?"))
shouldn't print?

Yikes, huh? Here are the answers:

Example 1: (def foo (Long. "1"))

Example 2: (def foo (Boolean. false))

For example 1, the map equality breaks down because map keys are compared with Java's .equals, and a Long is never .equals to an Integer, even for the same numeric value -- despite Clojure's = treating the two numbers as equal. In example 2, Clojure's conditionals test for identity with the canonical false, not equality to it: anything besides nil and that one false instance counts as true, so a distinct Boolean object holding false is treated as true even though it's equal to false.

I would definitely consider #1 a bug: if = says two values are equal, then collections built from them should compare equal too. #2 is more debatable, but it seems more intuitive that a Boolean object holding false be considered false in conditionals as well.
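
You can see both mechanisms directly at the REPL (a quick sketch; the boxed constructors are deprecated in modern Java, but they illustrate the point):

user=> (.equals (Integer. 1) (Long. 1))
false
user=> (= (Integer. 1) (Long. 1))
true
user=> (identical? false (Boolean. false))
false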



Migrating data from a SQL database to Hadoop

Over on the BackType tech blog, I wrote about the various options available for migrating data from a SQL database to Hadoop, the problems with existing solutions, and a new solution that we open-sourced. The tool we open-sourced is on GitHub here.


Proof that 1 = 0 using a common logical fallacy

A while ago I read a post by Daniel Levine that shows a formal proof that x*0 = 0. Here's a reprint of the proof:

  1. y = y (identity axiom)
  2. y - y = 0 (arithmetic)
  3. x*(y - y) = 0 (substitution)
  4. x*y - x*y = 0 (distributive)
  5. x*y = x*y (arithmetic)

The logic of this proof is that since we can reduce x*0 = 0 to the identity axiom, x*0 = 0 must be true. Unfortunately, this is not logically sound: deriving a true statement from an assumption doesn't prove the assumption. That's the fallacy of affirming the consequent.
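
The same technique will happily "prove" the claim in this post's title. Here's a sketch of my own (not a reprint from anywhere):

  1. 1 = 0 (the claim to "prove")
  2. 1*0 = 0*0 (multiply both sides by 0)
  3. 0 = 0 (arithmetic)

We reduced 1 = 0 to a true statement, so by the same reasoning 1 = 0 would be true -- which is absurd, so the reasoning is unsound.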



Thrift + Graphs = Strong, flexible schemas on Hadoop

There are a lot of misconceptions about what Hadoop is useful for and what kind of data you can put in it. Many people think Hadoop is only meant for unstructured data like log files. While Hadoop is great for log files, it's also fantastic for strongly typed, structured data.

In this post I'll discuss how you can use a tool like Thrift to store strongly typed data in Hadoop while retaining the flexibility to evolve your schema. We'll look at graph-based schemas and see why they are an ideal fit for many Hadoop-based applications.
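
To make that concrete, here's a minimal sketch of what a graph-based schema might look like in Thrift's interface definition language. The types and fields here are my own illustration, not the schema from the post:

// Nodes identify entities; each fact is a small edge struct; and a union
// ties the types together so new node and edge types can be added later
// without breaking existing serialized data.
union PersonID {
  1: string cookie;
  2: i64 user_id;
}

struct AgeEdge {
  1: required PersonID id;
  2: required i32 age;
}

struct FriendEdge {
  1: required PersonID id1;
  2: required PersonID id2;
}

union DataUnit {
  1: AgeEdge age;
  2: FriendEdge friend;
}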



Follow-up to "The mathematics behind Hadoop-based systems"

In a previous post, I developed an equation modeling the stable runtime of an iterative, batch-oriented workflow. We saw how the equation explained a number of counter-intuitive behaviors of batch-oriented systems. In this post, we will learn how to measure the amount of overhead versus dynamic time in a workflow, which is the first step in applying the theory to optimize it.
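
As a sketch of the measurement idea -- assuming, as in my reading of the previous post, that each run's time decomposes as runtime = overhead + rate * data-hours -- two measured runs are enough to solve for both unknowns. The function below is my own illustration, not code from the post:

(defn overhead-and-rate
  "Given two [data-hours runtime] measurements, solve the linear system
   runtime = overhead + rate * data-hours for the overhead and the rate."
  [[hours-1 runtime-1] [hours-2 runtime-2]]
  (let [rate     (/ (- runtime-2 runtime-1) (- hours-2 hours-1))
        overhead (- runtime-1 (* rate hours-1))]
    {:overhead overhead :dynamic-rate rate}))

;; A run over 2 hours of data took 1.0 hours; a run over 6 hours took 2.0:
;; (overhead-and-rate [2 1.0] [6 2.0])
;; => {:overhead 0.5, :dynamic-rate 0.25}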
