01/24 2014
Friday afternoon office. at Grove Spot – View on Path.

01/20 2014
panthercoffee:

This is an incredibly happy moment for us at Panther Coffee as we celebrate not one but three prizes and distinct recognition of our coffee, our work and the great work of our friends, partners and co-workers all along the long and complex coffee chain.

This week Panther received not one but two Good Food Awards in San Francisco: one for Panther Coffee Ethiopia Chelba and another for Panther Coffee Kailash.

At the same time in Durham, NC, Camila Ramos, representing Panther Coffee, won first place at the Big Eastern SE Barista Competition!
You can watch her winning presentation online at http://new.livestream.com/SpecialtyCoffeeAssociationOfAmerica/BigEasternEvent/videos/39935696

A very special thank you and a huge shout out to Maximo Ramos, producer of Kailash in Nicaragua, and his whole team both at the farm and at Virmax.
Watch Don Maximo telling us more about Kailash here: http://www.panthercoffee.com/post/73484366343/a-short-interview-with-don-maximo-ramos-producer

And many thanks to the whole Panther team and customers for your support and all the intense cheering and love this week. We ❤️ you

01/12 2014
– View on Path.

12/28 2013
I used Prolog in a comparative languages course. The biggest program we did was a map-coloring one (color a map with only four colors so that no bordering items have the same color, given a mapping of things that border each other). I say biggest because we were given the most time with it. I started out like most people in my class trying to hack the language into letting me code a stinking algorithm to color a stinking map. Then I wrote a test function to check if the map was colored and, in a flash of prolog, realized that that was really all I needed to code.

http://c2.com/cgi/wiki?PrologLanguage (via programmingisterrible)

09/25 2013

Some Riak Secondary Index Notes

Riak Secondary Indexes (2i) are pretty nice. As of Riak 1.4, results come back in order and can be paginated, and streaming works pretty well with the Ruby client (with both Protocol Buffers and Excon HTTP). There are a few notes and gotchas, though.

For these examples, you have a bucket with a hundred keys in it, numbered 1-100, and indexed by that number.

Secondary Indexes Aren’t Consistent

2i makes the same consistency guarantees as Riak itself. If you’ve queried for 0-200, which matches all 100 records, you might not get a hundred records back. If, by the time the index scan hits 50, you’ve deleted records 67, 15, and 86, you might not get 67 or 86, and depending on how fast the deletes propagated, you might not get 15 either. Records somebody adds mid-scan may or may not show up in the results.
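To make the race concrete, here’s a toy timeline in plain Ruby (a simulation, not the Riak client; it reuses the key numbers from above):

```ruby
# Toy model: the scan walks keys in index order while three deletes land
# at the moment the cursor reaches 50. Deletes behind the cursor are
# already in the result; deletes ahead of it never get read.
deleted = []
results = []
(1..100).each do |k|
  deleted.concat([15, 67, 86]) if k == 50 # concurrent deletes arrive here
  results << k unless deleted.include?(k)
end

results.include?(15) # => true  – the scan had already passed 15
results.include?(67) # => false – the delete beat the scan to 67
results.length       # => 98
```

In real Riak the “might not get 15 either” case comes from delete timing relative to the scan; the simulation only shows the clean version where the cursor position decides.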

Pagination is nice but not as complete as SQL

Riak 1.4 added pagination for secondary indexes. It’s not as nice as traditional pagination as seen in will_paginate, which has the luxury of making SQL queries:

SELECT COUNT(*)
    FROM posts
    WHERE
        published_at IS NOT NULL AND
        user_id = 12345;

SELECT *
    FROM posts
    WHERE
        published_at IS NOT NULL AND
        user_id = 12345
    LIMIT 5
    OFFSET 30;

Riak 2i has no equivalent of the former short of fetching all the keys, and if it did implement it, you’d be better off just querying for all the posts in range and paginating in-client.

i = Riak::SecondaryIndex.new(
    posts_bucket,
    'user_publish_bin',
    ('12345_0000000000'..'12345_1380074400')
    )

keys = i.keys
total_pages = (keys.length / 5.0).ceil # round up so a partial last page counts
offset = (params[:page] - 1) * 5       # assuming 1-based page numbers
current_page = keys[offset, 5]         # take exactly 5 keys, not 6

posts = posts_bucket.get_many current_page

With that said, the pagination features are useful if you don’t mind jamming client state into links: the continuation slug from pagination means that users who stop and read posts won’t see the same post at the top of the next page when you make a new one. If you can get away with “Previous,” “Next,” and maybe a list of previous pages, pagination is right up your alley.
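The continuation flow can be sketched in plain Ruby (an in-memory array stands in for the index, and `paged_index_query` is a hypothetical helper mimicking the max_results/continuation semantics, not the client API):

```ruby
# Hypothetical stand-in for a paginated 2i query: return up to max_results
# keys plus an opaque continuation that marks where the next page resumes.
def paged_index_query(sorted_keys, max_results, continuation = nil)
  start = continuation ? sorted_keys.index { |k| k > continuation } : 0
  return [[], nil] if start.nil?
  page = sorted_keys[start, max_results]
  next_cont = page.length == max_results ? page.last : nil
  [page, next_cont]
end

keys = (1..12).map { |n| format('%03d', n) } # "001".."012", already sorted
page1, cont = paged_index_query(keys, 5)
# page1 => ["001", "002", "003", "004", "005"]
page2, cont = paged_index_query(keys, 5, cont)
# page2 => ["006", "007", "008", "009", "010"]
```

Because the continuation marks a position in key order rather than an offset, a post published before the cursor doesn’t shift later pages the way SQL’s OFFSET would.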

Streaming is Useful

Riak 1.4 also brings streaming to 2i; you can get little lumps of keys delivered to your client as Riak sorts them, instead of all at once. If you’re feeding these keys into a processing queue to be handled elsewhere, this is nice, can save you some memory (and therefore GC pauses), and isn’t even difficult.

i = Riak::SecondaryIndex.new buck, 'index_int', 0..50

i.keys {|k| puts k.inspect}
# ["0", "1", "2", "3", "4", "5", "6"]
# ["7", "8", "9", "10", … "49"]
# []

You’ll notice in this case that the stream is chunky, and that the chunks aren’t evenly sized. What happened is that the first few results became available right away, and by the time that message was out the door, all the rest of the results were ready to go.

The only caveat is that it’s not a “get out of consistency jail free” card.

How It Works

Secondary indexes are stored much like regular data, but instead of key/value pairs, they’re index_key/key pairs. A range query in a vnode involves seeking through leveldb to where the start of the range should be, and reading all the entries until it passes the end of the range. The entries read by each vnode are then merge-sorted by the index state machine, and finally returned to the client.
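As a toy sketch of that final merge step (plain Ruby with made-up vnode result lists; the real coordination lives inside riak_kv, not in client code):

```ruby
# Each vnode walks its own leveldb range and returns entries already sorted;
# the coordinator repeatedly takes the smallest head across all the streams.
def merge_sorted(vnode_results)
  lists = vnode_results.map(&:dup) # don't mutate the caller's arrays
  merged = []
  until lists.all?(&:empty?)
    smallest = lists.reject(&:empty?).min_by(&:first)
    merged << smallest.shift
  end
  merged
end

merged = merge_sorted([[1, 4, 9], [2, 3, 10], [5, 7]])
# => [1, 2, 3, 4, 5, 7, 9, 10]
```

This is also why results arrive in order cheaply: each vnode’s leveldb scan is already sorted, so only the merge is left to do.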

Recommended Reading
