All of our Clojure projects are really hybrids once the Java dependencies and shims are taken into account.
I use IntelliJ with Cursive Clojure as my IDE. IntelliJ provides excellent Java tooling, and Cursive is easy to use and well maintained, with an enthusiastic community and an active author.
Emacs with CIDER is another common Clojure setup, but mine has the advantage that when I’m working on a plain Java project it’s easy to keep a Clojure REPL open as a scratch-pad.
Clojure may look foreign at first glance, but once you peer past the parens and prefix notation you can treat it as just an odd variant of Java to begin with. Familiarity with functional idioms and idiosyncrasies comes with time, but from day one there are two aspects of Clojure that make it compelling to anyone working on a Java project.
The REPL brings an immediacy to a language, reducing the distance between the developer and execution to basically naught. Project Kulla will see a REPL ship with Java 9, but until then Clojure provides a perfectly usable tool for playing with and evaluating Java code.
What happens when I exceed a Netty ByteBuf capacity?
Similar questions pop up all the time. I can dive into documentation, or I can play in the REPL:
(import 'io.netty.buffer.Unpooled)

(.writeBytes (Unpooled/buffer 5 10) (.getBytes "asdasdasdasd"))
IndexOutOfBoundsException writerIndex(0) + minWritableBytes(12) exceeds maxCapacity(10):
UnpooledHeapByteBuf(ridx: 0, widx: 0, cap: 5/10)
io.netty.buffer.AbstractByteBuf.ensureWritable (AbstractByteBuf.java:242)
The syntax is different, but the example given is simply Java methods being called on a Java object.
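For readers coming from Java, the interop sugar maps one-to-one onto familiar constructs. A quick illustration using standard JDK classes:

```clojure
(System/currentTimeMillis)  ; static method:   System.currentTimeMillis()
(.toUpperCase "netty")      ; instance method: "netty".toUpperCase()
(java.util.Date.)           ; constructor:     new java.util.Date()
```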
I will continue to use the Clojure REPL once Java 9 ships, even when I’m working exclusively in Java. Why?
Clojure is expressive. I was introduced to Clojure by my friend and colleague Paul Carey at UBS in London; I recall how enthused he was about the ‘expressive’ power of the language. I didn’t care, and I simply didn’t understand the benefit. Java is fine. 640k ought to be enough for anybody.
You’ve likely heard the Alan Perlis quote:
“better to have a hundred functions operate on one data structure than ten functions on ten data structures”
Some things are learned by doing; in my case, working with Clojure has left me with an appreciation of the succinct, and of the core Clojure functions that operate on sequences.
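A tiny taste of what that looks like in practice, using a hypothetical list of host-names and two of those core sequence functions:

```clojure
(def hosts ["mta1101" "mta1102" "mta1201" "mta1202"])

;; group hosts by their five-character datacenter prefix
(group-by #(subs % 0 5) hosts)
;; => {"mta11" ["mta1101" "mta1102"], "mta12" ["mta1201" "mta1202"]}

;; count hosts per prefix
(frequencies (map #(subs % 0 5) hosts))
;; => {"mta11" 2, "mta12" 2}
```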
How does Kafka partition data from a dozen hosts between brokers?
One datacenter has a dozen MTA hosts; they send logs to Kafka keyed by host-name. The default partitioning strategy is simply hash of key % number of partitions, so assuming the hash function is fine I should get a fairly even distribution. But what if I want to quickly sanity-check that?
Using Java 7 I might write a unit test that loops through my list of host-names, partitions each, aggregates the results, and then prints the answer. That’s a minor perversion of the test runner: there’s no real value in that test, so I will delete it as soon as I’ve evaluated it. Is it worth the bother? With Java 8 I might use a more functional approach. How verbose would this task be if I were using the Java 9 REPL?
Using Clojure the question is brief and the answer immediate:
(import 'kafka.producer.DefaultPartitioner
        'kafka.utils.VerifiableProperties)

(let [partitioner (DefaultPartitioner. (VerifiableProperties.))]
  (group-by #(.partition partitioner % 3)
            ["mta1101" "mta1102" "mta1103" "mta1201" "mta1202" "mta1203"
             "mta3101" "mta3102" "mta3103" "mta3201" "mta3202" "mta3203"]))
=>
{2 ["mta1101" "mta1203" "mta3102" "mta3201"],
0 ["mta1102" "mta1201" "mta3103" "mta3202"],
1 ["mta1103" "mta1202" "mta3101" "mta3203"]}
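And if all I want is the count per partition rather than the membership, the same data yields to another core function. A sketch starting from the result map above:

```clojure
(def result {2 ["mta1101" "mta1203" "mta3102" "mta3201"]
             0 ["mta1102" "mta1201" "mta3103" "mta3202"]
             1 ["mta1103" "mta1202" "mta3101" "mta3203"]})

;; replace each partition's host list with its size
(into {} (map (fn [[p hs]] [p (count hs)])) result)
;; => {0 4, 1 4, 2 4}
```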
The next time you open up IntelliJ, download Cursive and open a REPL.