Saturday, February 27, 2016

Yes. Yes. Yes. Yes. Yes.

5 Things That Will Make Blood and Wine Great

Although originally given a tentative release window of the first quarter of 2016, like all good ambiguously dated things, The Witcher 3’s second and more sizable expansion, Blood and Wine, has been pushed back to the first half of 2016. Which does technically still include the first quarter, but come on. They wouldn’t be giving themselves those extra three months of leeway if they didn’t think they needed it.

So while we may have to wait a bit longer for our next and seemingly final foray into the life of Geralt of Rivia, we’ve still got enough nuggets of information to dive in, Scrooge McDuck style, and swim in the oceans of speculation until CD Projekt Red lock down dates, deadlines, and details for us all. To that end, and knowing what we know, here are five things that could be done in Blood and Wine that would make it great.

Must. Find. Time. To. Complete. Pillars. Of. Eternity. And. Firewatch. Soon.

Alameda Point's Spirits Alley featured on Roads & Kingdoms

A little local coverage from one of my favorite websites: The Point of Diminishing Returns for Adult-Beverage Enthusiasts

Once the fog deepens, nearly covering the western span of the Bay Bridge, the deserted base looks more like the abandoned movie set it was than the burgeoning home of alcohol innovation it has become.

Wednesday, February 24, 2016

Stuff I'm reading, late February edition

The rain is over. Bummer. It was good while it lasted.

  • Disks for Data Centers: White paper for FAST 2016
    We believe it is time to develop, jointly with the industry and academia, a new line of disks that are specifically designed for large scale data centers and services.
  • Not-quite-so-broken TLS: lessons in re-engineering a security protocol specification and implementation
    On the surface this is a paper about a TLS implementation, but the really interesting story to me is the attempt to ‘do it right,’ and the techniques and considerations involved in that process. The IT landscape is littered with bug-ridden and vulnerable software – surely we can do better? And if we’re going to make a serious attempt at that, where better than something like a TLS stack – because bugs and vulnerabilities there also expose everything that relies on it – i.e. pretty much the whole of the internet.
  • A Critique of ANSI SQL Isolation Levels
    The ANSI SQL isolation levels were originally defined in prose, in terms of three specific anomalies that they were designed to prevent. Unsurprisingly, it turns out that those original definitions are somewhat ambiguous and open to both a strict and a broad interpretation. It also turns out that they are not sufficient, since there is one even more basic anomaly not mentioned in the standard that needs to be prevented in order to be able to implement rollback and recovery. Looking even more deeply, the paper uncovers eight different phenomena (anomalies) that can occur, and six different isolation levels.
  • Google's Transition from Single Datacenter, to Failover, to a Native Multihomed Architecture
    The main idea of the paper is that the typical failover architecture used when moving from a single datacenter to multiple datacenters doesn’t work well in practice. What does work, where work means using fewer resources while providing high availability and consistency, is a natively multihomed architecture
  • The Deactivation of the American Worker
    The job terminations, like the bulk of the media outlet’s work, were first experienced by most Gawker employees in digital, rather than physical space. Deleting the accounts was merely the company’s attempt to assert control of its office space, and Slack’s role in the layoffs simply exemplified where work was actually being done; it also serves as an indicator of, for many employees in the coming years, where it will end.
  • The Absurdity of What Investors See Each Day
    What happened is when NYSE first allowed [traders] to collocate in the [same building], people started to get into pissing matches over the length of their cables. Just to give you an idea, a foot of cable equates to one nanosecond, which is a billionth of a second. People were getting into pissing matches over a billionth of a second.


    NYSE measured the distance to the furthest cabinet, which is where people put their servers. It was 185 yards. So they gave every [high-frequency trader] a cable of 185 yards.

    Then, traders who were previously closer to the [exchange server] asked to move to the farthest end of the building. Why? Because when a cable is coiled up, there's a light dispersion that is slightly greater than when the cable is straight.

  • President Obama Announces His Intent to Nominate Carla D. Hayden as Librarian of Congress
    She began her career with the Chicago Public Library as the Young Adult Services Coordinator from 1979 to 1982 and as a Library Associate and Children’s Librarian from 1973 to 1979.

Sunday, February 21, 2016

The story of CVE-2015-7547

I don't believe there is a Pulitzer Prize for software.

But if there were such a prize, it should be given to the teams from RedHat and from Google who worked on CVE-2015-7547.

Let's start the roundup by looking a bit at Dan Kaminsky's essay: A Skeleton Key of Unknown Strength

The glibc DNS bug (CVE-2015-7547) is unusually bad. Even Shellshock and Heartbleed tended to affect things we knew were on the network and knew we had to defend. This affects a universally used library (glibc) at a universally used protocol (DNS). Generic tools that we didn’t even know had network surface (sudo) are thus exposed, as is software written in programming languages designed explicitly to be safe.

Kaminsky goes on to give a high-level summary of how the bug allows attacks:

Somewhat simplified, the attacks depend on:
  • A buffer being filled with about 2048 bytes of data from a DNS response
  • The stub retrying, for whatever reason
  • Two responses ultimately getting stacked into the same buffer, with over 2048 bytes from the wire
The flaw is linked to the fact that the stack has two outstanding requests at the same time – one for IPv4 addresses, and one for IPv6 addresses. Furthermore DNS can operate over both UDP and TCP, with the ability to upgrade from the former to the latter. There is error handling in DNS, but most errors and retries are handled by the caching resolver, not the stub. That means any weird errors just cause the (safer, more properly written) middlebox to handle the complexity, reducing degrees of freedom for hitting glibc.
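The dual A/AAAA lookup that sets the whole thing in motion is just the ordinary getaddrinfo() path: any unspecified-family lookup asks glibc for both record types. A minimal caller looks something like this (I'm using "localhost" only so the example resolves without network access; actually reaching the vulnerable DNS path requires a name served by a resolver the attacker controls):

```c
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *res = NULL;

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;      /* ask for both IPv4 (A) and IPv6 (AAAA) */
    hints.ai_socktype = SOCK_STREAM;

    /* An unspecified-family lookup is what drives glibc's parallel
     * A/AAAA queries (via _nss_dns_gethostbyname4_r for DNS names). */
    int rc = getaddrinfo("localhost", NULL, &hints, &res);
    printf("getaddrinfo(localhost) rc=%d\n", rc);
    if (rc == 0)
        freeaddrinfo(res);
    return 0;
}
```

Plenty of everyday tools (ssh, sudo, curl) end up in exactly this call, which is why the attack surface is so broad.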

An interesting thing about this bug is that it was more-or-less concurrently studied by two separate security analysis teams. Here's how the Google team summarizes the issue in their article: CVE-2015-7547: glibc getaddrinfo stack-based buffer overflow

glibc reserves 2048 bytes in the stack through alloca() for the DNS answer at _nss_dns_gethostbyname4_r() for hosting responses to a DNS query.

Later on, at send_dg() and send_vc(), if the response is larger than 2048 bytes, a new buffer is allocated from the heap and all the information (buffer pointer, new buffer size and response size) is updated.

Under certain conditions a mismatch between the stack buffer and the new heap allocation will happen. The final effect is that the stack buffer will be used to store the DNS response, even though the response is larger than the stack buffer and a heap buffer was allocated. This behavior leads to the stack buffer overflow.

The vectors to trigger this buffer overflow are very common and can include ssh, sudo, and curl. We are confident that the exploitation vectors are diverse and widespread; we have not attempted to enumerate these vectors further.

That last paragraph is a doozy.

Still, both of the above articles, although fascinating and informative, pale beside the epic, encyclopedic, exhaustive, and fascinating treatise written by Carlos O'Donell of RedHat and posted to the GNU C Library mailing list: [PATCH] CVE-2015-7547 --- glibc getaddrinfo() stack-based buffer overflow.

O'Donell's explication is perhaps the greatest debugging/diagnosis/post-mortem write-up of a bug I've ever read.

If you've ever tried to precisely describe a bug, and how it can cause a security vulnerability, you'll know how hard it is to do that both exactly and clearly. Here's how O'Donell does it:

The defect is located in the glibc sources in the following file:

- resolv/res_send.c

as part of the send_dg and send_vc functions which are part of the
__libc_res_nsend (res_nsend) interface which is used by many of the
higher level interfaces including getaddrinfo (indirectly via the DNS
NSS module.)

One way to trigger the buffer mismanagement is like this:

* Have the target attempt a DNS resolution for a domain you control.
  - Need to get A and AAAA queries.
* First response is 2048 bytes.
  - Fills the alloca buffer entirely with 0 left over.
  - send_dg attempts to reuse the user buffer but can't.
  - New buffer created but due to bug old alloca buffer is used with new
    size of 65535 (size of the malloc'd buffer).
  - Response should be valid.
* Send second response.
  - This response should be flawed in such a way that it forces
    __libc_res_nsend to retry the query. It is sufficient for example to
    pick any of the listed failure modes in the code which return zero.
* Send third response.
  - The third response can contain 2048 bytes of valid response.
  - The remaining 63487 bytes of the response are the attack payload and
    the recvfrom smashes the stack with it.

The flaw happens because when send_dg is retried it restarts the query,
but the second time around the answer buffer points to the alloca'd
buffer but with the wrong size.

O'Donell then proceeds to walk you through the bug, line by line, showing how the code in question proceeds, inexorably, down the path to destruction, until it commits the fatal mistake:

So we allocate a new buffer, set *anssizp to MAXPACKET, but fail to set *ansp to the new buffer, and fail to update *thisanssizp to the new size.

And, therefore:

So now in __libc_res_nsend the first answer buffer has a recorded size of MAXPACKET bytes, but is still the same alloca'd space that is only 2048 bytes long.

The send_dg function exits, and we loop in __libc_res_nsend looking for an answer with the next resolver. The buffers are reused and send_dg is called again and this time it results in `MAXPACKET - 2048` bytes being overflowed from the response directly onto the stack.

There's more, too, and O'Donell takes you through all of it, including several other, much less severe bugs that they uncovered while tracking this one down and studying it with tools like valgrind.

O'Donell's patch is very precise, very clearly explained, very thoroughly studied.

But, as Kaminsky points out in today's follow-up, it's still not clear that we understand the extent of the danger of this bug: I Might Be Afraid Of This Ghost

A few people have privately asked me how this particular flaw compares to last year’s issue, dubbed “Ghost” by its finders at Qualys.


the constraints on CVE-2015-7547 are “IPv6 compatible getaddrinfo”. That ain’t much. The bug doesn’t even care about the payload, only how much is delivered and if it had to retry.

It’s also a much larger malicious payload we get to work with. Ghost was four bytes (not that that’s not enough, but still).

In Ghost’s defense, we know that flaw can traverse caches, requiring far less access for attackers. CVE-2015-7547 is weird enough that we’re just not sure.

It's fascinating that, apparently due to complete coincidence, the teams at Google and at RedHat uncovered this behavior independently. Better, they figured out a way to coordinate their work:

In the course of our investigation, and to our surprise, we learned that the glibc maintainers had previously been alerted of the issue via their bug tracker in July, 2015. (bug). We couldn't immediately tell whether the bug fix was underway, so we worked hard to make sure we understood the issue and then reached out to the glibc maintainers. To our delight, Florian Weimer and Carlos O’Donell of Red Hat had also been studying the bug’s impact, albeit completely independently! Due to the sensitive nature of the issue, the investigation, patch creation, and regression tests performed primarily by Florian and Carlos had continued “off-bug.”

This was an amazing coincidence, and thanks to their hard work and cooperation, we were able to translate both teams’ knowledge into a comprehensive patch and regression test to protect glibc users.

It was very interesting to read these articles, and I'm glad that the various teams took the time to share them, and even more glad that companies like RedHat and Google are continuing to fund work like this, because, in the end, this is how software becomes better, painful though that process might be.

Saturday, February 20, 2016

I am like you. You are like me.

Read this story.

Read it over and over and over.

And smile, and be glad.

Wednesday, February 17, 2016

Blind injection

Obviously, the gravitational wave discovery was extremely, extremely cool.

But what I thought was in a lot of ways much cooler was the technique the team used to build in a process which ensured that they were extremely careful in their analysis, and weren't easily fooled: LIGO-Virgo Blind Injection

The LIGO Scientific Collaboration and the Virgo Collaboration conducted their latest joint observation run (using the LIGO Hanford, LIGO Livingston, Virgo and GEO 600 detectors) from July, 2009 through October 2010, and are jointly searching through the resulting data for gravitational wave signals standing above the detector noise levels. To make sure they get it right, they train and test their search procedures with many simulated signals that are injected into the detectors, or directly into the data streams. The data analysts agreed in advance to a "blind" test: a few carefully-selected members of the collaborations would secretly inject some (zero, one, or maybe more) signals into the data without telling anyone. The secret goes into a "Blind Injection Envelope", to be opened when the searches are complete. Such a "mock data challenge" has the potential to stress-test the full procedure and uncover problems that could not be found in other ways.

It must be really pleasing, in that what-makes-an-engineer-deeply-satisfied sort of way, to open up the Blind Injection Envelope and discover that your analysis was in fact correct.

It isn't a perfect comparison, but I am strongly reminded of the Netflix engineering team's approach to building fault tolerance and reliability into their systems by intentionally provoking failures: The Netflix Simian Army

Imagine getting a flat tire. Even if you have a spare tire in your trunk, do you know if it is inflated? Do you have the tools to change it? And, most importantly, do you remember how to do it right? One way to make sure you can deal with a flat tire on the freeway, in the rain, in the middle of the night is to poke a hole in your tire once a week in your driveway on a Sunday afternoon and go through the drill of replacing it. This is expensive and time-consuming in the real world, but can be (almost) free and automated in the cloud.

This was our philosophy when we built Chaos Monkey, a tool that randomly disables our production instances to make sure we can survive this common type of failure without any customer impact. The name comes from the idea of unleashing a wild monkey with a weapon in your data center (or cloud region) to randomly shoot down instances and chew through cables -- all the while we continue serving our customers without interruption. By running Chaos Monkey in the middle of a business day, in a carefully monitored environment with engineers standing by to address any problems, we can still learn the lessons about the weaknesses of our system, and build automatic recovery mechanisms to deal with them. So next time an instance fails at 3 am on a Sunday, we won't even notice.

In computing circles, this sort of thing is often gathered under the term Recovery Oriented Computing, but I really like the term "blind injection."

I'm going to remember that, and keep my eye out for places to take advantage of that technique.

Friday, February 12, 2016

Saving the desert

Here's some wonderful news to pick me up after what was, for totally unrelated reasons, an extremely rough week.

  • Volcanic spires and Joshua trees: Obama protects 1.8 million acres in California's desert
    President Obama designated three new national monuments in the California desert Thursday, expanding federal protection to 1.8 million acres of landscapes that have retained their natural beauty despite decades of heavy mining, cattle ranching and off-roading.
  • Photos: Obama Declares 3 New National Monuments In California Desert
    All three areas lie east of Los Angeles. Two of the new monuments — Castle Mountains and Mojave Trails — are near California's border with Nevada.

    And crucially, "the new monuments will link already protected lands, including Joshua Tree National Park, Mojave National Preserve, and fifteen congressionally-designated Wilderness areas, permanently protecting key wildlife corridors and providing plants and animals with the space and elevation range that they will need in order to adapt to the impacts of climate change," the release says.

  • With 3 new monuments, Obama creates world’s second-largest desert preserve
    The designations under the 1906 Antiquities Act connect an array of existing protected areas, including Joshua Tree National Park, Mojave National Preserve and 15 wilderness areas, creating a nearly 10 million-acre arid land reserve that is surpassed only by Namibia’s Namib-Naukluft National Park.
  • Obama Just Added Three More National Monuments
    Mojave Trails National Monument

    This is the largest of the newly protected areas and spans 1.6 million acres, over 350,000 of which were already protected. The area includes ancient Native American trading routes, a long stretch of Route 66, and World War II training camps. Natural highlights include the Pisgah Crater lava flows, Marble Mountains Fossil Beds, and the Amboy Crater.

  • Mojave Trails National Monument
    Mojave Trails boasts stunning springs of underground water, like diamonds in the rough, teeming with desert life, and shifting sand dunes that hum in the wind - havens for kit foxes.

    Other national treasures in the proposed monument include:

    • The scenic lava flows of Amboy Crater—North America’s youngest volcano and a National Natural Landmark;
    • The 550 million-year-old trilobite fossil beds of the Marble Mountains;
    • Sleeping Beauty Valley—the last intact valley representing the West Mojave plant ecosystem; and
    • The Cady Mountains—one of the best areas in the Mojave to see bighorn sheep.
  • Land Status
  • Press Release: Historic Designation of New California Desert National Monuments Celebrated by Local Communities
    The Mojave Trails National Monument links the Mojave National Preserve to Joshua Tree National Park and existing Wilderness Areas, and includes vital wildlife habitat, desert vistas and important Native American cultural sites. Sand to Snow offers some of the most biologically diverse habitats in the country, linking the San Gorgonio Wilderness to Joshua Tree National Park and the San Bernardino National Forest. Some of the finest Joshua tree, piñon pine, and juniper forests in the desert grow in the Castle Mountains National Monument. Given the exceptional historical, ecological, and geological features found in each area – from Route 66 to the Marble Mountains Fossil Beds to desert tortoise and bighorn sheep habitat – these lands are well-deserving of their new national monument status.

    “Along with over 100 historians, archaeologists, and other experts, I enthusiastically welcome the designation of the Mojave Trails, Sand to Snow and Castle Mountains National Monuments,” said Dr. Clifford E. Trafzer, Rupert Costo Chair in American Indian Affairs, University of California, Riverside. “President Obama has taken a step forward to preserve not only the beauty of these lands, but also our shared history. Now these places will be better protected against theft and damage of Native American objects and artifacts. With respect and good stewardship, these public lands are repositories of knowledge, just waiting to be understood.”

This is the land of my childhood (well, my middle-school childhood, anyway). It is as harsh and unforgiving a place as exists on the planet, but my oh my is it a treasure for those who take the time to get to know and appreciate it.

Thank you, Mr. President.

Thursday, February 11, 2016

Stuff I'm reading, President's Day edition

El Niño is apparently over, as the rain has completely stopped. Sigh.

  • Gas leak at Porter Ranch well is stopped -- at least temporarily
    The leaking well is one of 115 injection wells at the 80-year-old, 3,600-acre Aliso Canyon facility, which stores 86 billion cubic feet of gas that serves 11 million people in the Los Angeles basin. Many of those wells are corroded and mechanically damaged, the gas company said.

    Yet it is the only field in a distribution area stretching from Porter Ranch 60 miles south to Santa Ana that can ensure reliability in both winter, when homes and businesses use significant amounts of natural gas for heating, and summer, when gas-fired generators supply power to air conditioners.

    Efforts to kill the well are being conducted under new orders imposed by the Safety and Enforcement Division of the California Public Utilities Commission in consultation with the state Department of Conservation's Division of Oil, Gas and Geothermal Resources.

  • The Princeton Bitcoin textbook is now freely available
    If you’re looking to truly understand how Bitcoin works at a technical level and have a basic familiarity with computer science and programming, this book is for you. Researchers and advanced students will find the book useful as well — starting around Chapter 5, most chapters have novel intellectual contributions.
  • Transitioning from SPDY to HTTP/2
    HTTP/2 is the next-generation protocol for transferring information on the web, improving upon HTTP/1.1 with more features leading to better performance. Since then we've seen huge adoption of HTTP/2 from both web servers and browsers, with most now supporting HTTP/2. Over 25% of resources in Chrome are currently served over HTTP/2, compared to less than 5% over SPDY. Based on such strong adoption, starting on May 15th — the anniversary of the HTTP/2 RFC — Chrome will no longer support SPDY.
  • The Malware Museum
    The Malware Museum is a collection of malware programs, usually viruses, that were distributed in the 1980s and 1990s on home computers. Once they infected a system, they would sometimes show animation or messages that you had been infected. Through the use of emulations, and additionally removing any destructive routines within the viruses, this collection allows you to experience virus infection of decades ago with safety.
  • Planning for Disaster
    I’m not optimistic that, as a group, computer scientists and computing professionals can prevent this disaster from happening: the economic forces driving automation and system integration are too strong. But of course we should try. We also need to think about what we’re going to do if, all of a sudden, a lot of people suddenly expect us to start producing computer systems that actually work, and perhaps hold us accountable when we fail to do so.
  • The Moral Hazard of Complexity-Theoretic Assumptions
    Computational-complexity theory focuses on classifying computational problems according to their inherent difficulty. The theory relies on some fundamental abstractions, including that it is useful to classify algorithms according to their worst-case complexity, as it gives us a universal performance guarantee, and that polynomial functions display moderate growth.
  • Jepsen: RethinkDB 2.2.3 reconfiguration
    I offered the team a more aggressive failure model: we’d dynamically reconfigure the cluster membership during the test. This is a harder problem than consensus with fixed membership: both old and new nodes must gracefully agree on the membership change, ensure that both sets of nodes will agree on any operations performed during the handover, and finally transition to normal consensus on the new set of nodes. The delicate handoff of operations from old nodes to new provides ample opportunities for mistakes.
  • Einstein's gravitational waves 'seen' from black holes
    Expected signals are extremely subtle, and disturb the machines, known as interferometers, by just fractions of the width of an atom.

    But the black hole merger was picked up by two widely separated LIGO facilities in the US.

    The merger radiated three times the mass of the sun in pure gravitational energy.

  • Top 5 Targets of a Gravity Wave Observatory
    an instrument called LIGO (the Laser Interferometer Gravitational-Wave Observatory) had apparently observed a “spectacular” direct gravitational wave signal, minute ripples in the fabric of spacetime created by distant and very massive objects—in this case, the collision and merger of two black holes, one 36 times the mass of our Sun, the other 29 times. Einstein first theorized that gravitational waves should exist in 1915, but evidence for them has been indirect so far. LIGO may be the first instrument to ever see them directly, by measuring the slight contraction and expansion of the distance separating distant mirrors.

    We asked physicists what they’d be most excited to observe with a fully functioning gravitational wave observatory. Here are their top 5 favorite targets.

  • The Resetting of the Startup Industry
    The startup industry may be “resetting,” which doesn’t mean a “crash” but rather just a resetting of valuations, timescales, winners/losers, capital sources and the relative emphasis of growth rates vs. burn rates.
  • Was Black Friday A DiSaaSter Or Simply Reversion To The Mean?
    Last Friday, LinkedIn, Salesforce and Workday lost $18B in market capitalization. To put that in perspective, these three SaaS companies lost more in market cap on Friday than 15 current SaaS leaders are worth…combined.

    How can this be possible? LinkedIn, Salesforce and Workday are growing revenue with sticky customers and they are targeting large addressable markets. Are they now suddenly undesirable companies?

  • Founders – Use Your Down Round To Clean Up Your Cap Table
    I learned this lesson 127 times between 2000 and 2005. I started investing in 1994 and while there was some bumpiness in 1997 and again in 1999, the real pain happened between 2000 and 2005. I watched, participated, and suffered through every type of creative financing as companies were struggling to raise capital in this time frame. I’ve seen every imaginable type of liquidation preference structure, pay-to-play dynamic, preferred return, ratchet, share/option bonus, option repricing, and carveout. I suffered through the next financing after implementing a complex structure, or a sale of the company, or a liquidation. I’ve spent way too much time with lawyers, rights offerings, liquidation waterfalls, and angry/frustrated people who are calculating share ownership by class to see if they can exert pressure on an outcome that they really can’t impact anyway, and certainly haven’t been constructively contributing to.
  • Why Commons Should Not Have Ideological Litmus Tests
    The final point I want to bring up here is how codes of conduct should be used. These are not things which should be seen as pseudo-legal or process-oriented documents. If you go this way, people will abuse the system. It is better in my experience to vest responsibility with the maintainers in keeping the peace, not dispensing out justice, and to have codes of conduct aimed at the former, not the latter. Justice is a thorny issue, one philosophers around the world have been arguing about for millennia with no clear resolution.

Sunday, February 7, 2016

Super Bowl 50

Things that happen when your city hosts a Super Bowl:

  • There's a fair amount of extra traffic on the roads.
    The one-hour trek on Highway 101 down the Peninsula to the South Bay might stretch to two hours next week. The Embarcadero and Market Street closures near Super Bowl City, the fan village, are causing painfully slow trips around San Francisco's Ferry Building. Already-jammed BART and Caltrain parking lots could fill by 6 a.m. And 600 or more charter buses will be used to carry fans to the Feb. 7 game at Levi's Stadium in Santa Clara.
  • Everybody is visiting San Francisco, even though the game isn't played there.
    While the game is in Santa Clara, nine days of activities leading up to the game are in San Francisco. That means transportation impacts from January 23 to February 12. Whether you're visiting, working or a resident, plan ahead, pack your patience and take transit, bike or walk where you need to go.
  • There are special events, both free, and paid, which you can go to.
    Super Bowl City presented by Verizon is the Host Committee’s free-to-the-public fan village designed to celebrate the milestone Super Bowl 50 and to highlight its unique place in the Bay Area.

    The NFL Experience driven by Hyundai, pro football’s interactive theme park, will be hosted by the Bay Area during Super Bowl Week. To be located at the Moscone Center in San Francisco, the NFL Experience celebrates the sport’s history and electrifying atmosphere of Super Bowl.

  • Some of the events are so popular that you can't get in.
    San Francisco police turned away thousands of people from Super Bowl City late Friday when the event reached maximum capacity.
  • It's hard to find a hotel room.
    "We're starting at $1,500 for the regular rooms, suites go up to $10,000 a night and we have a four day minimum that we require," said Roger Huldi, the General Manager at the W Hotel.
  • But maybe you can rent somebody's spare bedroom, cheap.
    There are simply too many rooms and not enough guests. "You get a flood of people listing their places and nobody looks at it," says Ian McHenry, a co-founder of research firm Beyond Pricing, which sells rental hosts a service to help calculate how much they should charge. "There’s way too much supply in the market." Of the nearly 10,000 currently active Airbnb listings in the Bay Area this weekend, around 60 percent are still available, according to the San Jose Mercury News.
  • There's been a noticeable increase in the number of private charter jets flying in and out.
    Private jet companies say this year's game could near or top records for previous Super Bowls, given the attractive location, large number of private jet airports in the area and the excitement over the game.

    Companies say business is up between 10 and 20 percent over last year. And while consolidated numbers are hard to come by, experts estimate between 1,000 and 1,500 jets could arrive at Oakland, San Jose, Hayward and other California airports.

  • But, uhm, don't try to fly your drone.
    “Temporary Flight Restrictions will prohibit certain aircraft operations, including unmanned aircraft operations, within a 32-mile radius of the stadium in Santa Clara, Calif. on game day,” reads a statement from the Federal Aviation Administration.
  • Everyone who's anyone will be there: Jet-setters swooping into Bay Area for Super Bowl 50.
    Usually, about a half-dozen private jets might use Panico’s facility at any one time. This weekend, there could be as many as 200. While their owners are attending the modest little sporting event just down the Nimitz Freeway, the planes will be parked wingtip to wingtip. So many $50 million jets will be slumming it in Hayward that the airport will shut down one runway and turn it into a temporary parking lot.
  • The weather cooperates, and is just unbelievably nice.
    Plentiful sunshine. High 73F. Winds NW at 5 to 10 mph.
  • There's plenty of other entertainment. (My co-worker's son is the drummer for the Latin Youth Jazz Ensemble!)
    For the musically inclined, there are several acts lined up in the Bay Area to get you ready for the game. Performing at the City Stage in San Francisco on Sunday will be the Latin Youth Jazz Ensemble (3 p.m. ET), John Brothers Piano Company (3:45 p.m. ET) and the Glide Ensemble (5 p.m. ET).
  • All the media companies have been busy crafting their advertisements.
    Super Bowl ads are practically an event unto themselves. And when they unfold on the screen this Sunday, viewers will see a reflection of America's diversity.

    While Hollywood faces a backlash over an all-white slate of acting nominees for this year’s Oscars, several of the TV spots airing during the big game will feature actors, athletes and characters who represent a range of ethnicity, generations, and sexual orientations.

  • The Goodyear Blimp spends all week flying around the area.
    "We're getting the beauty shots for the networks," said the photographer. "You know, downtown, sunsets."

    And what do they get in exchange? Priceless advertising seen by millions of fans from the ground and from their living rooms.

  • And, of course, at some point during the day there will be a football game of sorts.

Have fun, everyone! I think I'll be reading my newspaper and grilling ribs, although I'm sure I'll tune in for the ads.

Friday, February 5, 2016

It's not just a game, ...

... it's LARPing at Moszna Castle: The Witcher School

The Witcher School is a LARP for adults inspired by "The Witcher" games and the fantasy book series by Andrzej Sapkowski.

During the game you will become an apprentice going through a rigorous witcher training: you will learn fencing, archery and alchemy; you will hunt monsters, unveil secrets and intrigues; and finally, you will face tough choices and discover the consequences the hard way. You will move to Moszna Castle in Poland, redecorated for our needs and transformed into a real witchers' abode where you will meet famous characters known from "The Witcher" books and games.

The "Design Document" goes on to elaborate:

Up to this point, you could only read about this or see this on the screen of your TV or PC. But here... if you want to slash your enemy with a sword you do not have to imagine doing that, or push a button on your controller. What you have to do here is to do it for real. You will learn how to brew potions using real ingredients, and what gestures are needed to use witcher signs. And these are only a few of the things you will be able to learn. You will be an integral part of the live action. This will be an opportunity to immerse yourself in a well-known setting of The Witcher and live your own unforgettable adventure.

Thursday, February 4, 2016

Some links on large object support in git

  • Storing large binary files in git repositories
    there are multiple 3rd party implementations that will try to solve the problem, many of them using a similar paradigm as a solution. In this blog post I will go through seven alternative approaches for handling large binary files in Git repositories, with their respective pros and cons.
  • Git Annex vs. Git LFS
    my experience with Annex is that it’s full-featured and a bit less focused in its approach. It’s easy enough to check in files and sync them among various locations, but there are also testing tools, a web-based GUI, and lots of options you can use in different situations. The git-annex project site reveals a lot: plenty of features, updates, discussions, and enough threads that sort of trail off.

    Git LFS is at the other end of things: a bit nicer-looking, a bit more straightforward, and significantly simpler. Tack it on to your repository, tell it what kind of files to watch, and then pretty much forget about it. If you check in a file (with a normal git add whatever.mp4), the magic happens via a pre-push hook where LFS will check your watch list and spring into action if needed. It otherwise blends in after minimal configuration.

  • git-annex v6
    This new unlocked file mode uses git's smudge/clean filters, and I was busy developing it all through December. It started out playing catch-up with git-lfs somewhat, but has significantly surpassed it now in several ways.

    So, if you had tried git-annex before, but found it didn't meet your needs, you may want to give it another look now.
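Both tools plug into the same underlying git mechanism mentioned above: smudge/clean filters declared in .gitattributes. As a rough sketch of what that wiring looks like (these lines mirror what git lfs install and git lfs track typically write; the *.mp4 pattern is just an example):

```ini
# .gitattributes — route matching files through the "lfs" filter
*.mp4 filter=lfs diff=lfs merge=lfs -text

# .git/config (or ~/.gitconfig) — define what the "lfs" filter runs
[filter "lfs"]
	clean = git-lfs clean -- %f
	smudge = git-lfs smudge -- %f
	required = true
```

On git add, the clean filter replaces the file's contents with a small pointer stub; on checkout, the smudge filter swaps the real blob back in. git-annex's v6 unlocked mode hooks the same filter points, which is why the two approaches feel so similar in day-to-day use.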

Monday, February 1, 2016

Stuff I'm reading, early February edition

If the groundhog were to look today, she would DEFINITELY see her shadow, at least here in my neighborhood.

  • All Change Please
    The combined changes in networking, memory, storage, and processors that are heading towards our data centers will bring about profound changes to the way we design and build distributed systems, and our understanding of what is possible. Today I’d like to take a moment to summarise what we’ve been learning over the past few weeks and months about these advances and their implications.
  • High-Availability at Massive Scale: Building Google’s Data Infrastructure for Ads
    While most distributed systems handle machine-level failures well, handling datacenter-level failures is less common. In our experience, handling datacenter-level failures is critical for running true high availability systems. Most of our systems (e.g. Photon, F1, Mesa) now support multi-homing as a fundamental design property. Multi-homed systems run live in multiple datacenters all the time, adaptively moving load between datacenters, with the ability to handle outages of any scale completely transparently. This paper focuses primarily on stream processing systems, and describes our general approaches for building high availability multi-homed systems, discusses common challenges and solutions, and shares what we have learned in building and running these large-scale systems for over ten years.
  • Immutability Changes Everything
    It wasn't that long ago that computation was expensive, disk storage was expensive, DRAM (dynamic random access memory) was expensive, but coordination with latches was cheap. Now all these have changed using cheap computation (with many-core), cheap commodity disks, and cheap DRAM and SSDs (solid-state drives), while coordination with latches has become harder because latch latency loses lots of instruction opportunities. Keeping immutable copies of lots of data is now affordable, and one payoff is reduced coordination challenges.
  • To Trie or not to Trie – a comparison of efficient data structures
    I have been reading up a bit by bit on efficient data structures, primarily from the perspective of memory utilization. Data structures that provide constant lookup time with minimal memory utilization can give a significant performance boost, since access to CPU cache is considerably faster than access to RAM. This post is a compendium of a few data structures I came across and salient aspects about them.
  • POPL 2016
    Last month saw the 43rd edition of the ACM SIGPLAN-SIGACT Symposium on the Principles of Programming Languages (POPL). Gabriel Scherer did a wonderful job of gathering links to all of the accepted papers in a GitHub repo. For this week, I’ve chosen five papers from the conference that caught my eye.
  • NSA’s top hacking boss explains how to protect your network from his attack squads
    NSA tiger teams follow a six-stage process when attempting to crack a target, he explained. These are reconnaissance, initial exploitation, establish persistence, install tools, move laterally, and then collect, exfiltrate and exploit the data.
  • Amazon’s Experiment with Profitability
    Amazon Chief Executive Officer Jeff Bezos has spent more than two decades reinvesting earnings back into the company. That steadfast refusal to strive for profitability never seemed to hurt the company or its stock price, and Amazon’s market value (now about $275 billion) passed Wal-Mart’s last year. All the cash it generated went into infrastructure development, logistics and technology; it experimented with new products and services, entered new markets, tried out new retail segments, all while capturing a sizable share of the market for e-commerce.
  • I Hate the Lord of the Rings
    A software developer explains why the Lord of the Rings is too much like work, and why Middle Earth exists in every office.
  • Startup Interviewing is (redacted)
    Silicon Valley is full of startups who fetishize the candidate that comes into the interview, answers a few clever fantasy coding challenges, and ultimately ends up the award-winning hire that will surely implement the elusive algorithm that will herald a new era of profitability for the fledgling VC-backed company.
  • Inverting Binary Trees Considered Harmful
    he was like - wait a minute I read this really cute puzzle last week and I must ask you this - there are n sailors and m beer bottles and something to do with bottles being passed around and one of the bottles containing a poison and one of the sailors being dishonest and something about identifying that dishonest sailor before you consume the poison and die. I truly wished I had consumed the poison instead of laboring through that mess.
  • "Can you solve this problem for me on the whiteboard?"
    Programmers use computers. It's what we do and where we spend our time. If you can't at least give me a text editor and the toolchain for the language(s) you're interested in me using, you're wasting both our time. While I'm not afraid of using a whiteboard to help illustrate general problems, if you're asking me to write code on a whiteboard and judging me based on that, you're taking me out of my element and I'm not giving you a representative picture of who I am or how I code.

    Don't get me wrong, whiteboards can be a great interview tool. Some of the best questions I've been asked have been presented to me on a whiteboard, but a whiteboard was used to explain the concept, the interviewer wrote down what I said and used that to help frame a solution to the problem. It was a very different situation than "write code for me on the whiteboard."