IBM to buy HashiCorp in $6.4B deal (reuters.com)
557 points by amateurhuman 10 days ago | 374 comments





Recent and related:

IBM nearing a buyout deal for HashiCorp, source says - https://news.ycombinator.com/item?id=40135303 - April 2024 (170 comments)


Well, it was nice while it lasted! HashiCorp always felt like a company made by actual engineers, not "bean counters". Now it will just be another cog in the IBM machine, slowly grinding it down, removing everything attractive, just like RedHat and CentOS.

Hopefully this will create a new wave of innovation, and someone will create something to replace the monopoly on IaC that IBM now owns.


A lot of the people I respected from Heroku went there; glad they got a chance to use their skills to build something useful and profitable, and gladder still that they got their payout.

Sadly I echo your sentiment about the future, as someone who has heard second-hand about the quality of work at modern Redhat.

I am wondering how many more rounds of consolidation are left until there is no more space to innovate and we only have ossified rent-seeking entities in the IT space.


Heh at “got their payout”. HashiCorp IPO’d at $80, employees are locked up for 6 months. This sale is at $35.

Wow IBM got quite the discount!

The stock was at $31. The $80 level was just shortly after the IPO. They paid fair market price.

They IPO'd in 2021.

Yes. And many of the Heroku employees you speak of would have got RSUs that owed taxes on an $80 basis, watched the stock trade far below that for most of the time since, and now have a maximum expected value of $35.

This is not a payday for many people. The only people who got a payday were those who could liquidate in the IPO.


Yeah okay, if you had 0.15% stock you're still out with $10M.

Smaller and bigger percentages will be different, but that's retirement money for hundreds and hundreds unless you plan to live in a very high CoL area. Also, most of them will likely have to keep working there for years before cashing out some other millions.


It's a little more complicated than that.

First of all your percentage of ownership is unrealistic. I joined in November 2019 and got a grant of a few thousand RSUs that fully vested before I left, and that I still have most of, plus I bought some shares in a few rounds of our ESPP when that became available -- as of today I have just under 5,000 shares. HashiCorp has nearly 200 million shares issued, so I own a hair over .0025% of the company. Really early employees got relatively big blocks of options but nobody I knew well there, even employees there long enough to be in that category (and there were very few of them still around by December 2021), was looking at "fuck-you money" just from the IPO.

Second, the current price isn't the whole story for employees. I had RSUs because of when I joined so the story might have been different for earlier employees who had options, but I don't think it differs in ways that matter for this discussion. As background for others:

* On IPO day in December 2021, 10% of our vested RSUs were "unlocked" -- a bit of an unusual deal where we could sell those shares immediately (or at any later time). Note "vested" there -- if you had joined the day before the IPO and not vested any RSUs yet, nothing unlocked for you. (Most of the time, as I understand it, you don't have any unlocked shares as an employee when your company IPOs -- you get to watch the stock price do whatever it does, usually go down a lot, for six months to a year.)

* At a later date, if some criteria were met (which were both a report of quarterly earnings coming out and some specific financial metrics I forget), an additional tranche of vested shares (I think an additional 15%) unlocked -- I believe this was targeted at June 2022 and did happen on schedule.

* After 1 year, everything vested unlocked.

At the moment of the IPO the price was $80, but it initially climbed into the $90s pretty fast. At one point, during intraday trading, it actually (very briefly) broke just above $100.

So, if you were aware ahead of time that the normal trajectory of stock post-IPO is down, and if you put in the right kind and size of limit orders, and if you were lucky enough to not overestimate the limit and end up not selling anything at all, then you could sell enough shares while it was up to cover the taxes on all of it and potentially make a little money over that. I was that lucky, and managed to hit all of those conditions while selling almost all of my unlocked shares (I even managed to sell a small block of shares at $100), plus my entire first post-IPO vesting block, and ended up with enough to cover the taxes on the whole ball of already-vested shares, plus a few grand left over. Since then, I haven't sold any shares except for what was automatically sold at each of my RSU vesting events.

For RSUs not yet vested at the IPO, the IPO price didn't matter because they sold a tranche of each new vesting block at market price to cover the taxes on them when they vested -- you could end up owing additional taxes but only, as I understand it, if the share price rose between vesting and sale of the remaining shares in the block, so you would inherently have the funds to pay the taxes on the difference. (And if the price fell in that time, you could correspondingly claim a loss to reduce your taxes owed.)

There were a fair number of people who held onto all their shares till it was way down, though, and had to sell a lot to cover their tax bill in early 2022 -- I think if you waited that long you had to sell pretty much all your unlocked shares because the price was well down by tax time (it bottomed out under $30 in early March 2022, then rose for a while till it was back up over $55 right before tax day, so again, if you were lucky and bet on the timing right, you didn't end up too bad off, but waiting till the day before April 15 was not something I bet a lot of people felt comfortable doing while they were watching the price slide below $50 in late February).

I even warned one of the sales reps I worked with, while the price was still up, about the big tax bill he should prepare for, and he was certain I was wrong and that he would only be taxed when he sold, and only on the sale price. (He was of course wrong, but I tried...)

The June unlock was pretty much irrelevant for me because by that point the share price was down under $30 -- it spent the whole month of June after the first week under $35. The highest it went between June 30, 2022 and today was $44.34. The entire last year it's only made it above $35 on three days, and only closed above $35 on one of them. I figured long-term the company was likely to eventually either become profitable, or get bought, and in either case the price would bump back up.

I was thinking about cutting my losses and cashing out entirely when it dropped below $30 after the June layoffs, and again in November when it was below $20, and then yet again when I left the company in January of this year, but the analyst consensus seemed to be around $32-34 through all of that so I held on -- kinda glad I did now instead of selling at the bottom.


> if you had 0.15% stock you're still out with $ 10M.

... Barely any employees could have that much stock. There's 2200 employees from the most recent data I see. Even if the outstanding shares were 100% employee owned, a uniform allocation would give each of them at most 0.045%. Obviously, the shares are not uniformly distributed across employees, nor is HashiCorp 100% employee owned.


You've misunderstood my point. RSUs became taxable at the $80 stock price for many. Depending on where you're based, that could mean you owe(d) anywhere from $22 to $38 per share in taxes. At the top end of that range, if you're still holding any stock, this acquisition has just permanently crystalised a capital loss for you. There's no upside that gets you above what you owe/paid in taxes.

There are many many people who made a loss on this, even before the acquisition announcement.

Also I think your ownership % is way off. There's a pretty small group of people, most of them the earliest employees + execs, who would have got out with $10M. HashiCorp currently has thousands of employees and would have churned through thousands more over the years.


I don't know how RSUs granted pre-public work through an IPO, but let's do some math assuming IPO day is "the day when RSUs vest":

It's IPO day and you get 1000 RSUs unlocked/vested. The share price is $80, so you made $80K in gains. For simplicity, let's say you owed $40K in taxes.

One of two things happens:

- HashiCorp auto-sells to cover and you get 500 fewer shares.

- You need to pay your taxes on your own and earmark $40K.

Let's pick the easy one: If Hashicorp sold for you that day you are now sitting on 500 shares with a cost basis of $80.

Let's go to today: IBM buys and the person held. The 500 shares are now worth $35 each, so the value is $17,500.

You cash out -- getting $17,500 in your account, and a capital loss of $22,500.

Sure, 17K isn't as cool as 40K, but the person still "made money", just _less_. You make it sound like this person is now "underwater" because they had a capital loss.

=====

And kids at home, this is why you sell some/all of your RSUs as you get them. No one company should be more than 15% of your portfolio. Even the one you work at.


    > No one company should be more than 15% of your portfolio. Even the one you work at.
Tell that to the guy who went all-in on the NVidia employee share purchase plan and is worth more than 50M USD. (I think it was a Register article posted here recently.) Sometimes the gamble is worth it. That said, for every one of those once-in-a-lifetime stories, there are many, many more about engineers who walked away from post-IPO start-ups with very little wealth gained. So many have posted here before; it just isn't worth it.

I don't need to make any assumptions about anything here; other former colleagues have gone through the specifics in other replies. Nothing is auto-sold at IPO to cover taxes, and a maximum of 10% of what had vested was allowed to be sold before the 6mo lockup expired. There was a total of a few weeks before a combination of trading blackout window, lockup, and market crash conspired to make it easy to be underwater if you hadn't elected to sell everything you could coming into the IPO.

_A lot_ of people ended up with a loss.


Ok -- I need your help. I'm missing something here.

People got RSUs. They owed tax on said RSUs. The tax cannot be higher than the value of the RSU at the time of vest.

If people did not have enough cash to pay their tax bill, and did not sell enough RSUs to get cash to pay said tax bill, then yes, I can see those people "with a loss": they had a "surprise" tax bill, the RSU price went down, and now they have a cash problem. Is this what you mean happened?

They shouldn't have had to "sell everything" -- at most like 50%.

I'm arguing with you here because this stuff is complex, and many people shy away from trying to understand it, and that's a huge disservice for those in our industry.

For anyone reading along -- It's as simple as this: understand the tax implications of the assets you own, pay your taxes.


That's part of the surprise - I can't speak to the specifics for US citizens any more than others in this post have, as I'm not based there. Tax definitely wasn't determined _at time of vest_ for anybody though; it was at time of liquidity.

In Australia we were granted options, which ordinarily are taxed at time of exercise. Lots of people were surprised to discover, almost a full year after the IPO, that those options were also subject to a tax deferral scheme and any tax already paid at exercise wasn't sufficient. The actual taxable amount determined by HashiCorp and the ATO was the $80 IPO price. If you sold the full amount you were entitled to (10% of your vested holdings) at the IPO you were probably fine. If you sold nothing, because you thought you had already paid the required taxes, by the time you received the tax statement the value of your stock would have been less than what you owed in taxes.


I’m pretty sure U.S. law requires companies to withhold at 22% (or optionally higher) for any bonus/non-salary payments, which includes RSU vesting. Companies can choose to either “sell to cover” or just issue a proportionally lower amount of shares (e.g. you vested 1000 shares but only 780 show up in your brokerage account).

The problem occurs when 22% isn’t enough, which is often the case.


The taxes are computed using the IPO price, not the price at opening or closing on the first day of trading.

IPO price was $35.


IPO price was $80. Briefly touched slightly above $100, and then crashed with the rest of the market and has spent most of its time since below $30.

What are you talking about? The December 2021 IPO price was $80.

What? What's their strike price? If they are above the sale price their return is 0.

RSUs are regular shares, folks with options would have a different story.

It always amazes me how people play telephone with Red Hat and how bad the quality of life supposedly is post-IBM.

When they show the service awards they don’t even cover the 5-year ones, because they don’t have all day.

If it was so bad then you wouldn’t see engineers with 10, 15, or 20 years experience staying there. They already got their money from the IBM purchase so if it were bad then they would leave.

Oh but they don’t innovate anymore.

Summit is coming. Let’s see what gets announced and then a live demo.


> If it was so bad then you wouldn’t see engineers with 10, 15, or 20 years experience staying there. They already got their money from the IBM purchase so if it were bad then they would leave.

Every big, old, stagnant company is full of lifers who won’t move on for any number of reasons. The pay is good enough, at least it’s stable, the devil you know is better than the devil you don’t, yada yada yada. There are people in my life who work in jobs like that. They will openly admit that it sucks, but they are risk averse due to a combination of personality and family circumstances, so they stick it out. Their situation sucks, and they assume everything else sucks too. And often, because they’ve only worked in one place so long, they have a hard time finding other opportunities due to a combination of overly narrow experience and ageism.

The movie Office Space is about exactly the sort of company that is filled with lifers who hate their jobs but stay on the path of least resistance.

(I know absolutely nothing about working at Red Hat, so I’m not trying to make a specific claim about them. But I’ve known people in this situation at IBM and other companies that are too big for their own good.)


> they have a hard time finding other opportunities due to a combination of overly narrow experience and ageism

I too know several lifers at IBM. One thing I've realized is that staying loyal to a company over several years won't save you from ageism.

Your best defense against ageism may be to save more than 50% of your tech income for about 20 years, then move into management and build empires until the music stops.


Red Hat Principal Consultant here, July will be 7 years at the company for me.

Before IBM purchase: travel to clients, build and/or fix their stuff, recommend improvements

After IBM purchase: travel to clients, build and/or fix their stuff, recommend improvements

At least on my side of the aisle I haven't noticed any notable changes in my day to day work for Red Hat. IBM has been very light touch on our consulting services.


    > Oh but they don’t innovate anymore.
IBM was #4 in the US last year for patents here: https://www.ificlaims.com/rankings-top-50-2023.htm

Patents are a stronger signal of a company focused on financial engineering than a company focused on innovation.

Our current economic model kind of depends on the idea that we can always disrupt the status quo with American free-market ingenuity once it begins to stagnate, but maybe we have reached the limits of what Friedman's system can do or account for.

The American market is highly over-regulated, and most market libertarians would argue it hasn't been "free" in a long, long time.

I don’t understand people’s beef with IBM. They have been responsible for incredible R&D within computing. I even LIKE redhat/fedora!

HashiCorp had already sold out waaaay before this acquisition, and I also don’t understand why their engineers are seen as “special”…


People's beef here with IBM is they don't make shiny phones and laptops and don't create hip jobs where you're paid 500k+ to "change the world" by selling ads or making the 69th messaging app.

They just focus on tried-and-tested boring SW that big businesses find useful, and that's not popular on HN, which is more startup and disruption focused.


This is unnecessarily dismissive.

While Hashicorp hasn’t been exciting for a while, I fail to see how an acquisition by IBM will invigorate excitement, much less draw even a neutral reaction from many developers.

Hashicorp had a huge hand in defining and popularizing the modern DevOps procedures we now declare as best practices. That’s a torch that would be very difficult for a business like IBM to hold.

Perhaps I missed some things, but the core of Ansible feels like it’s continuing its path to be much less of a priority than the paid value-adds. I can’t help but think the core of Hashicorp’s products will go down this path, hence my pessimism.


> This is unnecessarily dismissive.

No, it is not. HN has both a "greybeard" audience that will cheer in "Go boring tech" posts and a "hipster" audience that is heavily start-up and disruption focused, as GP was saying. When talking about IBM and acquisitions or similar topics, it's usually the second audience that speaks more.

That doesn't mean that some acquisitions don't really kill the product, but you don't need to be as big and old as IBM to do that.


Do you mean Terraform, not Ansible?

IBM owns Ansible, redserk is saying Terraform will go a similar route. Although I don't see what they mean by core being lower priority than paid. The paid features are all available for free via AWX, which is the open source upstream of the paid product AAP.

Red Hat's business model is "Hellware" -- the open source versions are designed to be so incredibly difficult to install/manage/upgrade, or so lacking in stability, that you're forced to pay for their versions.

There are a number of valid criticisms of IBM.

IBM repeatedly cleaning house of anyone approaching (let alone in or even rarely beyond) middle age is abhorrent.

It's funny to characterise people's beef with IBM as being that they're boring, old, and stale when IBM are apparently allergic to anyone over 40.

Also their consultants have been some of the most weaponised-incompetence-laden, rude, and entitled idiots I've ever had the sincere displeasure to deal with.

IBM are an embarrassment to their own legacy imo.


Yeah I mean I feel you, but imo this is just what the world is. I've been fucked over many times in my career…people just have to learn to fuck back.

I was more commenting on the HN hate for the technology/products aspect. IBM has accomplished FAR more than Hashicorp, and everyone here acts like they were God's gift to software.


My beef with IBM as someone who worked for a company they acquired is that they would interfere with active deals that I was working on, force us to stand down while IBM tried to sell some other bullshit, then finally “allow us” to follow up with the customer once it’s too late, and the customer decided to move on to something else. Repeatedly.

Fuck IBM.


You have obviously never been the victim of IBM's consulting arm. I caution anyone against buying anything IBM now. Absolute nightmare to work with.

IBM’s consulting arm was finally so radioactive that they spun it out into a new company (Kyndryl). What I’ve seen is that customers still have a low opinion of the new company and they continue to refer to it as IBM.

Kyndryl is IBM??

Yes, and you wouldn't believe how bad they are. We had multiple incidents where colleagues had to explain basic stuff to them and hold their hands. I was in a couple of calls with their engineers, and those instantly reduced my impostor syndrome.

I worked for several years with IBM solutions and the like. I thought they ended up opening nearshore centers in Europe to "sell" "local" resources, but it was just detached Indian employees from the upper caste, billed higher than us because they were IBM experts.

or just work anywhere within IBM

Nah dude. Their internal business is a dinosaur in both girth and age. If they estimate 2 years for you, put away budget for 10. And all you’re gonna get is excuses and blame.

IBM took away the ability of CentOS to be a free and trivial to swap-in alternative to the paid product RedHat Enterprise. That RedHat was already in financial trouble due to self-cannibalizing their own paid product is irrelevant; emotionally, “IBM” – not “RedHat” – made the decision to stop charging $0 for their custom enterprise patchsets and release trains, and so IBM will always be the focus of community ire about RedHat’s acquisition.

I expect, like RedHat, that the Hashicorp acquisition will result in a lot of startups that do not need enterprise-grade products shifting away from “anything Hashicorp offers that needs to charge money for Hashicorp to stay revenue-positive” and towards “any and all free alternatives that lower the opex of a business”, along with derogatory comments about IBM predictably assigning a non-$0 price for Hashicorp’s future work output.


* Red Hat wasn't ever "in financial trouble" -- their revenue line was up-and-to-the-right for a ridiculous number of consecutive quarters. Even when they missed overall earnings estimates, it was rarely by much and they still usually beat EPS estimates for the quarter.

* IBM had little to do with Red Hat's maneuvers around CentOS (I worked at Red Hat for several years and still have friends there, and nothing anybody there said publicly about CentOS in 2020 or 2023 was materially different from things people there were saying about it internally in 2012). Some people have tried to blame IBM for a general culture shift but as far as I've seen, every bit of the CentOS debacle was laid squarely at the feet of Red Hat staff by most in this industry -- as it should have been, since most of those involved were employed there well before IBM bought the company.

IBM's reputation as an aging dinosaur was well-earned long before it bought Red Hat, and continues to be earned outside it. That earned reputation was why they bought RHT in the first place: IBM Cloud market share was (and still is) declining and they wanted a jumpstart in both revenue and engineering credibility from OpenShift in particular.


IBM was taken over by bean counters years ago. There were researchers and others who would literally skip being in the office, or find a way to avoid the bean counters, when they walked through IBM research labs (like Almaden Research Center) years ago (heard from multiple people years back who were working on contracts/etc. there - mainly academics).

Also, IBM has been extremely ageist in their "layoff" policies. They also have declined in quality by outsourcing to low cost/low skill areas.


I knew a guy who was laid off from IBM specifically for being older, which came out years later as part of the class action lawsuit...

There was a column, written by multiple writers under the same name, that did a great exposé on IBM and age discrimination, but I don't want to give said column its due since the columnist had other issues.

If it's really their due, you should give it to them. This value system where you have to punish people if they don't have the "right" views needs to stop. Would you like someone to do that to you? If they did good work, it doesn't get infected by whatever "issues" they had.


Like Bourbaki? Or they all happened to share a name?

I never worked there, but I worked at a security company that hired a bunch of ex-IBM X-Force security guys, and they hated IBM with a passion.

Self selection, to be sure, but their beefs were mostly about the crushing bureaucracy that was imposed on what was supposed to be a nimble type domain; (network) security is, after all, mostly leapfrog with the black hats.


I just got to spin down a bunch of infra that was originally in Softlayer, which IBM acquired years ago. IBM were terrible to work with, they frequently crashed services by evacuating VMs from hosts and then not powering them back up, and only notifying us long after our own monitoring detected it. Won't miss that.

IBM is to software as Boeing is to planes.

I will not be taking questions ;-)


I have the "honor" of getting to use IBM $PRODUCT at $COMPANY.

- It uses some form of consensus algorithm between all nodes that somehow manages to randomly get the whole cluster into a non-working state simply by existing, requiring manual reboots

- Patches randomly introduce new features, oftentimes with breaking changes to current behaviour

- Patches tend to break random other things, and even the patches for those patches often don't work

- For some reason the process for applying updates randomly changes every couple of patches, making automation all but impossible

- The support doesn't know how $PRODUCT works, which leads to us explaining to them how it actually does things

- It is ridiculously expensive, both in hardware and licensing costs

All of this has been going on for years without any sign of improvement, to the point that $COMPANY now avoids IBM if at all possible.


Look at what they did with the Phoenix project for the Canadian government. They are not the same IBM they were 50 years ago. Now they are a consulting firm that employs cheap labor.

https://news.ycombinator.com/item?id=15303555


IBM has always been a punching bag.

I had been wondering who would buy HCP. I sort of figured it was either going to be AWS, Google, or Azure, and then I figured the other vendors were going to have support removed (maybe gradually, maybe not).


It could have been worse: It could have been Oracle.

One of the reasons I left when I did was that it was starting to get really obvious that an acquisition was likely and I desperately did not want my work e-mail address to end in oracle.com.

Or Broadcom...

You talk about beef: look at what they did with a project for the Canadian government. They are not the same IBM they were 50 years ago. Now they are a consulting firm, and a shitty one.

https://news.ycombinator.com/item?id=15303555


So which of the other potential buyers of HCP is the magical non-shitty $BIGCORP you would’ve preferred?

I would have liked Microsoft to buy it.

Look at what they did with the Phoenix project for the Canadian government. They are not the same IBM they were 50 years ago. Now they are a consulting firm.

https://news.ycombinator.com/item?id=15303555


It was special when Mitchell Hashimoto was still at the helm.

Watson

Hashi code, such as Terraform, was (is) an amazing example of a good reference Go codebase. It was very hard for me to get into Go because, outside of the language trivia and hype, it was hard to learn about the patterns and best practices needed for building even a mid-sized application.

That's interesting. I found Go to be a very productive and easy language, coming from Typescript.

But I had a similar experience to yours with PHP; I just couldn't get into it.


After having written probably over 100k lines of Go code, my impression is that Go is simple, but not easy. The language has very few features to learn, but that results in a lot of boilerplate code, and there are more than a few footguns buried in the language itself. (My favorite: [1])

I find it very hard to write expressive, easy-to-read code, and more often than not I see people using massive switch-case statements and other hard-to-maintain patterns instead of abstracting things away, because it's so painful to create abstractions. (The Terraform/OpenTofu codebase is absolutely guilty of this btw; there is a reason why it's over 300k lines of code. There is a lot of procedural code in there with plenty of hidden global scope, so getting anything implemented that touches multiple parts typically requires a lot of contortions.)

It's not a bad language by any stretch, but there are things it is good at and things it is not really suited for.

[1]: https://gist.github.com/janosdebugs/f0a3b91a0a070ffb067de4dc...


I’ve always found that the Go language is simple in all the ways that don’t matter.

(In contrast to languages like Haskell and Clojure, which are simple in most of the ways that matter.)


Compilation speed matters, among other things, and monomorphization is often costly.

Is it because secondSlice is a reference (pointer?) to firstSlice?

Slices are structures that hold a pointer to the array, a length, and a capacity!

So, when you slice a slice, if you perform an array operation like “append” while there is existing capacity, it will use that array space for the new value.

When the sliced value is assigned to another variable, it’s not a pointer that’s copied, it’s a new slice value (with the old length). So, this new value thinks it has capacity to overwrite that last array value - and it does.

So, that also overwrites the other slice’s last value.

If you append again, though, you get a (new) expanded array. It’s easier to see with more variables as demonstrated here: https://go.dev/play/p/AZR5E5ALnLR

(Sorry for formatting issues in that link, on phone)

Check out this post for more details: https://go.dev/blog/slices-intro
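
Here's a minimal, self-contained sketch of the same behaviour (names and values are my own, just for illustration):

  package main

  import "fmt"

  func main() {
    // len 3, cap 4: the backing array has room for one more element.
    first := make([]int, 3, 4)
    first[0], first[1], first[2] = 1, 2, 3

    // Re-slicing shares first's backing array (len 2, cap still 4).
    second := first[:2]

    // This append writes into the shared array at index 2,
    // silently overwriting first[2].
    second = append(second, 99)
    fmt.Println(first, second) // [1 2 99] [1 2 99]

    // Appending past the capacity allocates a fresh backing array
    // (the gist's "bonus" case), so the slices stop aliasing each other.
    second = append(second, 100, 101)
    second[0] = 7
    fmt.Println(first, second) // [1 2 99] [7 2 99 100 101]
  }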


It's because slices have underlying arrays which define their capacity (cap(s)).

Both slices start out having the same underlying (bigger) array -so appending to one slice can affect the other one.

In the "bonus" part, though, the appends outgrew the original array, so new underlying arrays were allocated (i.e. the slices stopped sharing the same backing array).

Thanks for the heads-up, janosdebugs :)


Yes-ish? Slices are this weird construct where they sometimes behave like references and sometimes not. When I read the explanation, it always makes sense, but when using them it doesn't. For me the rule is: don't reuse slices and don't modify them unless you are the "owner" of the slice. Appending to a slice that was returned to you from a function is usually a pretty good way to have a fun afternoon debugging.

I find the claims that Go is easy just wrong. It's actually a harder language to write in because without discipline, you are going to end up maintaining massive amounts of boilerplate.

That's from someone who did a bunch - Perl, Ruby, Python, Java, C++, Scala.

Syntax is one thing, assembling an application with maintainable code is something else.


What in particular did you find difficult about building a maintainable codebase in Golang? Not quite understanding the boilerplate reference.

Code generation in Golang is something I've found removed a lot of boilerplate.


I am not used to writing code where 2/3 of it is "if err" statements.

Also, refactoring my logging statements so I could see the chain of events seemed like work I rarely had to do in other languages.

It's a language the designers of which - with ALL due respect - clearly have not built a modern large application in decades.


Yes, because other languages just hide errors from the user.

I think the reason people find Go a bit annoying with the error conditions is because Go actually treats errors as a primary thought, not an afterthought like Python and Java do.


I assume you're talking about languages with exceptions when saying "other language just hide errors from the user." I think that's a gross over-simplification of exception-based error handling. I generally do prefer explicit, but there are plenty of cases where exceptions are clearly elegant and more understandable.

My preference is a language like Elixir where most methods have an error-code returning version and a ! version that might raise an exception. Then you (the programmer) can choose what you need. If you're writing a controller method that is for production important code, use explicit. If you're writing tests and just want to catch and handle any exception and log it, use exceptions. Or whatever makes the most sense in each situation.


I've never gotten the explicit argument. Java checked exceptions are also part of the function signature/interface, and nothing prevents one from making a language where all exceptions are checked and then just doing:

    try {
       maybeError := FunctionThrowingValueError()
    } catch (ValueError e) {
       // do stuff
    }
I get at the end of the day it's all semantics, but personally I kinda like the error-specific syntax. If you want to do the normal return path, that's fine, but I prefer the semantics of Rust's Result type (EITHER a result OR an error may be set).

To each their own, it's not something I really worry about.
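
For what it's worth, Go's generics are now enough to sketch that either/or shape yourself -- purely an illustration of the semantics being discussed, not an idiomatic pattern (the language pushes you toward plain (T, error) returns instead):

  type Result[T any] struct {
    val T
    err error
  }

  func Ok[T any](v T) Result[T]  { return Result[T]{val: v} }
  func Err[T any](e error) Result[T] { return Result[T]{err: e} }

  // Unwrap mirrors Rust's semantics: the value, or a panic on error.
  func (r Result[T]) Unwrap() T {
    if r.err != nil {
      panic(r.err)
    }
    return r.val
  }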


Yeah same, Go's explicit argument never resonated with me either. In Elixir it's similar to a Result type, being a tuple such as either `{:ok, return_val}` or `{:err, err_msg}`, which is perfect for using with `case` or `with` depending on your situation.

You can't hide an exception if it crashes your program. You can definitely ignore a return from a function, essentially swallowing it. It's the definition of an anti-pattern.

I prefer to handle errors than ignore them. "If err" is actually one of the best things about Go

In most web applications I write, I have one error-handling block.

Access forbidden? Log a warning and show a 403 page. Is it JSON? Then return JSON.

Exception-handling in general is a pretty small part of most applications. In Go, MOST of the application is error-handling, often just duplicate code that is a nightmare to maintain. I just don't get why people insist it's somehow better, after we "evolved" from the brute-force way.


Errors usually happen during IO, but not in the main business logic and those two can be neatly separated.

But if you are coming from Java, I can understand that the single error handling block is more comfortable; coming from JavaScript/TypeScript, it's much easier to check if err != nil than to debug errors I forgot to handle at runtime.


I understand where you are coming from, but I actually like the explicit error handling in Golang. Things being explicit reduces complexity for me a lot and I find it easier to spot and resolve potential issues. It's definitely something that I can understand not working for everyone.

I agree on the logging point, but my experience was that explicit error handling combined with good test coverage meant we rarely got into non-deterministic situations where we relied extensively on logging to resolve things. But we also went through several iterations of tuning how we logged errors. It's definitely a rough edge in what is readily available in the language.
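
For anyone hitting the same rough edge, a minimal sketch of the usual tool -- fmt.Errorf with %w -- which keeps the chain of context attached on the way up (assumes the "fmt" and "os" imports; the function itself is made up):

  func loadConfig(path string) ([]byte, error) {
    raw, err := os.ReadFile(path)
    if err != nil {
      // %w wraps the underlying error: the eventual log line reads
      // `loading config "...": open ...: no such file or directory`,
      // and errors.Is / errors.As can still see the original error.
      return nil, fmt.Errorf("loading config %q: %w", path, err)
    }
    return raw, nil
  }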


> I understand where you are coming from, but I actually like the explicit error handling in Golang. Things being explicit reduces complexity for me a lot and I find it easier to spot and resolve potential issues. It's definitely something that I can understand not working for everyone.

This sound a lot of like Apple user arguments about iPhone 1 missing copy & paste over a decade ago.

I am very pedantic about checking responses for errors, but from my experience working with a team and an existing project, I see that people notoriously forget to check the result. TBH it is a pain to keep repeating the boilerplate `if err != nil ...`.

What's worse is that even documentation skips checks. For example, the `Close()` method. It almost always returns an error, but I've almost never seen anyone check it.

The reason is that if you want to use `defer` (which most people do), you end up with very ugly code.

The other alternative would be making sure you place (and properly handle the error from) Close() in multiple places (but then you risk missing a place).

And another solution would be using `goto` in a similar way to how it is used in the Linux kernel, but there are people who have a big problem with it. I had a boss who was religiously against goto (and did not seem to understand Dijkstra's argument) and asked me to remove it even though it made the code more readable.
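
For reference, the least-ugly `defer` pattern I know of (and it is still ugly) abuses a named return value so the Close error isn't lost but also doesn't clobber an earlier write error -- a sketch, the function itself is made up:

  func writeAll(path string, data []byte) (err error) {
    f, err := os.Create(path)
    if err != nil {
      return err
    }
    defer func() {
      // Keep the Close error, but don't overwrite an earlier write error.
      if cerr := f.Close(); cerr != nil && err == nil {
        err = cerr
      }
    }()
    _, err = f.Write(data)
    return err
  }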


I think go makes more sense if you imagine spending more time reading MRs and code than writing it.

Standard go error handling maximises for locality. You don't see many "long range" effects where you have to go and read the rest of the code to understand what's going to happen. Ideally everything you need is in the diff in front of you.

Stuff like defer() schedules de-alloc "near" where things get allocated, so you don't have to think about conditionals. If an MR touches only part of a large function you don't have to read the whole thing and understand the control flow.

The relative lack of abstraction limits the "infrastructure" / DSLs that ICs can create, which otherwise render code impenetrable to an outside reader. In a lot of C++ codebases you basically can't read an MR without digging into half the program, because what looks like a for loop is calling down into a custom iterator, or someone has created a custom allocator or _something_ that means code which looks simple has surprising behaviour.

A partial solution for that problem is to have a LOT of tests, but it manifests in other ways, e.g. figuring out the runtime complexity of a random snippet of C++ can be surprisingly hard without reading a lot of the program.

I personally find these things make go MRs somewhat easier to review than in other languages. IMHO people complaining "it's more annoying to write" (lacking stronger abstractions available in many other languages) are correct but that's not the whole story.

P.S: For Close(), you're right that most examples skip checking the error and maybe it would be better if they didn't. It only costs a few lines to have a function that takes anything Closable and logs an error (usually not much else you can do) but people like to skip that in examples.

  type Closable interface {
    Close() error
  }

  func checkedClose(c Closable, resourceName string) {
    if err := c.Close(); err != nil {
      log.Printf("failed to close %s: %v", resourceName, err)
    }
  }
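
Used like so (the file name is hypothetical):

  f, err := os.Open("data.txt")
  if err != nil {
    return err
  }
  defer checkedClose(f, "data.txt")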

Thanks for the Close() example, that's a nice solution, although would it work if you wanted to handle an error (not just log it)?

> Standard go error handling maximises for locality. You don't see many "long range" effects where you have to go and read the rest of the code to understand what's going to happen. Ideally everything you need is in the diff in front of you.

I'm assuming you're comparing to exceptions.

I don't know about that. I think this relies on discipline of the software engineer. I can see for example someone who is strict and only uses exceptions on failures and returns normal responses during usual operation.

With Go you can use errors.Is and errors.As which take away that locality. Or what's worse, you could have someone actually react based on the string of the error message (although with some packages, this might be the only way).

I still see your point though, but I also think Rust implemented what Go was trying to do.

You get a Result type, which you can either match to get the data and check the error, or you can pass it downward (yes, this will take away that locality, but then the compiler will warn you if you have a new unhandled error downstream), or you can choose to unwrap without checking the error, which will trigger a panic on error.
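
To make the errors.Is/errors.As point concrete, a small sketch (the sentinel and error type are the stdlib's from io/fs; the function and its behaviour are made up, and it assumes the "errors", "io/fs", "log", and "os" imports):

  func readOrDefault(path string, def []byte) ([]byte, error) {
    data, err := os.ReadFile(path)
    if err != nil {
      // Matching a sentinel defined far away, in io/fs...
      if errors.Is(err, fs.ErrNotExist) {
        return def, nil
      }
      // ...or unwrapping to a concrete error type defined elsewhere.
      var pathErr *fs.PathError
      if errors.As(err, &pathErr) {
        log.Printf("op=%s path=%s: %v", pathErr.Op, pathErr.Path, pathErr.Err)
      }
      return nil, err
    }
    return data, nil
  }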


Good points, I think it's fair to claim Result and Option are technically better (when combined with the necessary language features and compile-time checks).

Re: Close() errors yeah most times you would be better off writing the code in place if you really need to handle them. You can make a little helper if you find yourself repeating the same dance a lot. Usually there's not much you can do about close errors though.


Not really understanding the iPhone reference or how it relates here.

Sounds like the problem you have with the error checking relates more to development practice of colleagues than the language.

We used defer frequently. Never considered it ugly.

'goto' (hypothesising here, as I've not used it) and exception handling that is expected to be handled at the edges/boundary points of a codebase can be elegant, but they do need careful thought and design, in my experience. They can hide all sorts of issues and lead to a lot of spurious error handling from those who don't understand the intent. That's the biggest issue I have with implicit (magical) error handling - too many people do it poorly.


Everything is explicit until someone decides to introduce a panic() somewhere... (I get that this exists in more or less any language)

That said, in practice I see it following a similar philosophy to java checked exceptions, just with worse semantics.

Personally, I don't like high-boilerplate languages because they train me to start glossing over code, and it's harder for me to keep context when faced with a ton of boilerplate.

I don't hate go. I don't love it either. It's really good at a few things (static binaries, concurrency, backwards and forwards compatibility). I hate the lack of a fully-fleshed out standard library, the package management system is still a bit wonky (although much improved), and a few other aesthetic or minor gripes.

That said there's no language I really love, save maybe kotlin, which has the advantage of the superb java standard library, without all the structural dogma that used to (or still does) plague the language (OOP only, one public class per file, you need to make an anonymous interface to pass around functions, oh wait now we have a streaming API but it's super wonky with almost c++ like compilation error messages, hey null pointers are a great idea right oh wait no okay just toss some lombok annotations everywhere).

End of the day though a lot of talented people are golang first and sometimes you just gotta go where the industry does regardless of personal preference. There's a reason scientists are still using FORTRAN after all these years, and why so much heavy math is done in python of all things (yeah yeah I know Cython is a thing and end of the day numpy etc abstract a lot of it out of the way, but a built in csv and json module combined with the super easy syntax made it sticky for data scientists for a reason)


    > I am not used to writing code where 2/3 of it is "if err" statements.
I don't write Go, but I have seen this a lot when reading Go. It seems hard to escape. The same is true for pure C. You really need to check every single function output for errors, else errors compound, and it is much harder to diagnose failures. When I write Java with any kind of I/O, I need careful, tight exception handling so that the exception context will be narrow enough to allow me to diagnose failures after unexpected failures. Error handling is hard to do well in any language.

disagree. k8s is written in it just fine. plus, tons of other modern large applications in enterprise settings

K8s was famously written in Go by ex-Java developers, and the code base was full of Java patterns.

Which kind of proves my point. Even Google struggled to write clean, idiomatic Go.


> Code generation in Golang is something I've found removed a lot of boilerplate.

Not a gopher by any stretch, but to my way of thinking generated code is literally boilerplate; that's why it's generated. Or does Go have some metaprogramming facilities I'm unaware of?


I took the comment to relate to writing boilerplate.

So unrelated to generated code, if that makes sense. The generated code I'm sure had lots of boilerplate; it's just not code we needed to consider when developing.


Not the parent, but I find that doing dependency injection or defensive programming results in a lot of boilerplate. Custom error types are extremely wordy. The language also doesn't allow for storing metadata with types, only on structs as tags, which seriously hampers the ability to generate code. For example, you can't really express the concept of an integer inside a slice of slices needing validation metadata. You'll need to describe your data structure externally (OpenAPI, JSON Schema, etc.) and then generate code from that.
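
To illustrate the tag limitation (the `validate` tag is a made-up convention of the kind validation libraries use):

  type User struct {
    // Tags can only hang off struct fields...
    Email string `validate:"email"`
    // ...so there is nowhere to attach "every inner int must be
    // positive" to the elements of a nested container type:
    Scores [][]int `validate:"?"`
  }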

My experience of Golang is that dependency injection doesn't really have much benefit. It felt like a square peg in a round hole exercise when my team considered it. The team was almost exclusively Java/Typescript Devs so it was something that we thought we needed but I don't believe we actually missed once we decided to not pursue it.

If you are looking at OpenAPI in Golang I can recommend having a look at https://goa.design/. It's a DSL that generates OpenAPI specs and provides an implementation of the endpoints described. Can also generate gRPC from the same definitions.

We found this removed the need to write almost all of the API layer and a lot of the associated validation. We found the generated code including the server element to be production ready from the get go.


For OpenTofu specifically, having DI for implementing state encryption would have been really nice. If you look at the PR, a lot of code needed to be touched because the code was entirely procedural. Of course, one could just make a global variable, but that is all sorts of awful and makes the code really hard to test efficiently. But then again, this is a 300k line project, which in my opinion is way beyond what Go is really good for. ContainerSSH with 35k+ lines was already way too big.

Out of interest what language do you think would have been more appropriate and why?

For that size of codebase I'd have thought code structure and modularisation would be more important than language choice.


I wish I had an answer to that, but I don't know. I only worked on projects of comparable size in Go, Java and PHP. Java was maybe the best for abstractions (big surprise), but it really doesn't lend itself to system-level stuff.

> HashiCorp always felt like a company made by actual engineers.

IDK about this; in 2018 I was in a position to pay for their services. They asked for a stupid amount of money and got none because they asked so much.

Can't remember what the exact numbers were, but it felt like ElasticSearch or Oracle.


Inability to price things correctly sounds exactly like engineer behavior to me…

Same. I wanted to pay them for their features, but the pricing was such that I actually thought it was a gag or a troll at first and laughed. When I realized they were serious, I was like Homer fading into the bushes.

Same. And I didn't feel like we were getting anything for that crazy money aside from "support" (which management wanted, pre-IPO, to make a bunch of security audits seem easier). We preferred to stick with our own tooling and services that we built around Vault (for example) rather than use the official enterprise stuff. Same goes for Terraform today: I don't feel like we need Terraform Cloud when we've got other options in that space, including home-grown tooling.

Vault's client-based pricing was (is) the worst thing about selling it. When I was there, nobody in sales liked it except the SEs and account reps dealing with the largest customers (and those customers loved it because it actually saved them a substantial amount of money over other vendors' models like per-use or per-secret). All the customers except those very largest ones hated it. The repeated response from those who believed in the client-based pricing model, to those of us pointing out the issues with it, was essentially "if your customers don't like it, they must not understand it because you aren't doing a good enough job explaining it".

What I thought we really needed was a "starter/enterprise" dual-model pricing structure, so that smaller customers could get pricing in some unit they could understand and budget for, that would naturally and predictably grow as they grew, to a point where it would actually be beneficial to them to switch to client-based pricing -- but there seemed to be a general reluctance to have anything but a single pricing model for any of our products.


But it's even more expensive now! There's no limit!

The timing of this acquisition and the FTC's ban on non-compete agreements is perfect.

Usually during an acquisition like this, the key staff are paid out after two years on board the new company. So not a non-compete, but an incentive to stay and get their payout.

Most staff with no equity will leave quickly of course, so the invalidity of non-competes will definitely help those souls.


"golden handcuffs" they call them.

The ban isn’t yet in effect, and acquisition discussions would have started a while back. Plus, the FTC ban is already being litigated by business groups, unsurprisingly.

I see this as an opportunity. Not to replace HashiCorp's products - OpenTofu and OpenBao are snapping up most of the mindshare for now - but to build another OSS-first developer darling company.

Btw. OpenTofu 1.7.0 is coming out next week, which is the first release that contains meaningful Tofu-exclusive features! We just released the release candidate today.

State encryption, provider-defined functions on steroids, removed blocks, and a bunch more things are coming, see our docs for all the details[0].

We've also had a fun live-stream today, covering the improvements we're bringing to provider-defined functions[1].

[0]: https://opentofu.org/docs/next/intro/whats-new/

[1]: https://www.youtube.com/watch?v=6OXBv0MYalY


Onboardbase is a great alternative to HashiCorp Vault.

https://onboardbase.com/


i can only speak to the early days (joined around 11 folks), but the engineers then were top tier and hungry to build cool shit. A few years later (as an outsider) seemed innovation had slowed substantially. i still know there are great folks there, but has felt like HashiCorp’s focus lately has been packaging up all their tools into a cohesive all-in-one solution (this was actually Atlas in the early days) and figuring out their story around service lifecycle with experiments like Waypoint (Otto in the early days). IBM acquisition is likely best outcome.

Isn't that how it always is as any company matures? In a big company, you don't need just 5-star devs. You also need 3-star devs (and even 2-star devs) who work 9 to 3:30 (and maybe do emails/Slack between 3:30 and 4; bonus points if they study from 4 to 5). You need people who can take basic requirements and turn them into code that your 5-star devs are too bored to write. You need people who look at customer bugs, can do debugging, and submit a patch to fix a corner case your 5-star dev didn't think about 4 years ago when they were hopped up on caffeine, hopes, and dreams.

Honestly, Mitchell should still be very proud of what he built and the legacy of Hashicorp. Sure, the corp has taken a different direction lately but thanks to the licenses of the Hashicorp family of software, it's almost entirely available for forking and re-homing by the community that helped build it up to this point. E.g. opentofu and openbao. I'm sure other projects may follow and the legacy will endure, minus (or maybe not, you never know) contributions from the company they built to try to monetize and support that vision.

My personal opinion is it was a company for crack monkeys. Consul, Vault and Packer have been nothing but pain and misery for me over the last few years. The application of these technologies has been nothing but a loss of ROI and sanity on a promise.

And don't get me started on Terraform, which is a promise but rarely delivers. It's bad enough that a whole ecosystem appeared around it (like terragrunt) to patch up the holes in it.


When a massive ecosystem springs up around a product, that means it’s wildly successful, actually.

The person you are replying to made no statement about the success of the product. Success and PITA-ness are completely orthogonal.

Yeah I'm not saying it's not successful. It's just shit!

Regarding Red Hat, I dearly hope someone will replace the slow complicated mess that is ansible. It's crazy that this seems to be the best there is...

Saltstack is IMO superior to Ansible. It uses ZMQ for command and control. You can write everything in Python if you want, but the default is YAML + Jinja2. And it is desired-state, not procedural.

Not used it for about 5 years and I think they got bought by VMWare IIRC. The only downside is that Ansible won the mindshare so you're gonna be more on your own when it comes to writing esoteric formulas.


I wrote a tool similar to ansible in the old days. We both started about the same time, so wasn't really a goal to compete with it. Later I noticed they had some type of funding from Red Hat, which dulled my enthusiasm a bit. Then Docker/containers started hitting it big and I figured it would be the end of the niche and stopped.

Interesting that folks are still using it, though I'm not sure of the market share.


Why slow and complicated?

We're just starting to implement it and we've only heard good things about it.


Ansible is great if you have workflows where sysadmins SSH to servers manually. It can pretty much take that workflow and automate it.

The problem is it doesn’t go much beyond that, so you’re limited by SSH roundtrip latency and it’s a pain to parallelize (you end up either learning lots of options, or Mitogen can help). However fundamentally you’re still SSHing to machines, when really at scale you want some kind of agent on the machine (although Ansible is a reasonable way to bootstrap something else).


When I managed a large fleet of EC2 instances running CentOS I had Ansible running locally on each machine via a cron job. I only used remote SSH to orchestrate deployments (stop service, upgrade, test, put back in service).

Well, that's exactly what we need. Our servers are growing in number and it's a pain in the ass to log into each one of them via SSH and do stuff.

There is Mitogen [0] that helps a bit. Their website also kind of explains some of the issues:

> Requiring minimal configuration changes, it updates Ansible’s slow and wasteful shell-centric implementation with pure-Python equivalents, invoked via highly efficient remote procedure calls to persistent interpreters tunnelled over SSH. No changes are required to target hosts.

Then of course python itself is not very performant and yaml is quite the mess too. With ansible, you have global variables, group level variables that can override them, host level variables that can override those, role level variables, play/book level variables that can override those and ad-hoc level variables that can override all of the above. I am telling you, it can get incredibly messy and needlessly complicated quickly.

As I said though, it's still the best we've got even if not optimal. So I think it's a good idea to implement it to at least have something.

[0]: https://mitogen.networkgenomics.com/ansible_detailed.html


It was this, but hasn’t been for a couple of years at least. The culture really shifted once it was clear the pivot to becoming a SaaS-forward company wasn’t taking off. As soon as the IPO happened and even a little bit before, it felt like the place was being groomed down from somewhere unique and innovative to a standardized widget that would be attractive to enterprise-scale buyers like VMware or IBM.

What we are seeing with VC-driven "innovation" is only going to get worse when the Linux/BSD founders' generation is gone.

I think it's ok to tell this story now. Long long time ago when I was still at DO, I tried to buy HashiCorp. Well, I use "tried to buy" very loosely. It was when we were both pretty small startups, Joonas our Dir. Eng at the time was really into their tooling, thought it was very good plus Armon and Mitch are fantastic engineers. So I flew out from NYC to SF to meet with them "to talk". Well, I had no idea how to go about trying to buy a company and they didn't really seem that interested in joining us, so we stood around a grocery store parking lot shuffling our feet talking about how great Mitch and Armon are at building stuff and then I flew home. I think that's about as loosely as it gets when it comes to buying a company. Probably would have been a cool combo tho, who knows. Either way, they're great guys, super proud of them. <3

I was in a similar position at a company that _might_ have been able to make a good enough offer, but I never could convince the brass how amazing a company it was, and I never got any traction.

Disappointing to hear about this, Hashicorp was an amazing company. C’est la vie…


When I got back to NYC I said to my boss (our CEO) "we should probably buy HashiCorp" and he said "Yeah, probably" and then we never spoke of it again. We both knew the problem, even if we could have got it together to make an offer and had they been interested, we were growing considerably too quickly to integrate another business. It was a fun idea, and we had a good time entertaining it, but it wouldn't have worked.

My shopping list during those years was NPM, Deis, Hashi and BitBalloon (now Netlify). These days: I generally think startups should do more M&A!


Hashi never sold me on the integration of their products, which was my primary reason for not selecting them. Each is independently useful, but there is no nudge to combine them for a 1+1=3 feature set.

Kubernetes was the chasm: owning the compute platform is the core of utilizing and integrating Vault.

The primary issue was that there was never a "one click" way to create an environment using Vagrant, Packer, Nomad, Vault, Waypoint, and Boundary for a local developer-to-prod setup. Because of this, everyone built bespoke, and each component was independently debated and selected. They could have standardized a pipeline and let new companies get off the ground quickly, while existing companies could still pick and choose their pieces. Either way, you sell support contracts.

I hope they do well at IBM. Their cloud services strategy is to create a holistic platform, so there is still a chance Hashi products will get the integration they deserve.


FWIW, "HashiStack" was a much discussed, much promised, but never delivered thing. I think the way HashiCorp siloed their products into mini-fiefdoms (see interactions between the Vault and Terraform teams over the Terraform Vault provider) prevented a lot of cross-product integration, which is ironic for how "anti-silo" their go to market is.

There's probably an alternate reality where something like HashiStack became this generation's vSphere, and HashiCorp stayed independent and profitable.


I was an extremely early user and owner of a very large-scale Vault deployment on Kubernetes. I worked closely with a few of their sales engineers on it, and early on I was always told that although they supported Vault on Kubernetes via a Helm chart, they did not recommend running it on anything but EC2 instances (because of "security", though their reasoning never really made sense). During every meeting and conference I'd ask about Kubernetes support, gave many suggestions and feedback, and showed the problems we encountered - I don't know if the rep was blowing smoke up my ass, but a few times he told me that we were doing things they hadn't thought of yet.

Fast forward several years: I saw a little while ago that they no longer recommend EC2 as the only way to run Vault, that they fully support Kubernetes, and that several of my ideas/feedback are listed almost verbatim in the documentation (note, I am not accusing them of plagiarism - these were very obvious complaints, and I'm sure I wasn't the only one raising them after a while).

It always surprised me how these conversations went. "Well we don't really recommend kubernetes so we won't support (feature)."

Me: "Well the majority of your customers will want to use it this way, so....."

It was just a very frustrating process, and a frustrating product - I love what it does, but there are an unbelievable number of footguns hidden in the enterprise version, not to mention it has a way of worming itself irrevocably into your infrastructure, and due to extremely weird/obfuscated pricing models I'm fairly certain people are waking up to surprise bills nowadays. They also rug-pulled some OSS features, particularly MFA login, which kind of pissed me off. The product (in my view) is pretty much worthless to a company without that.


They probably don't want their customers to use a competitor's product instead of Nomad.

> was always told early on that although they supported vault on kubernetes via a helm chart, they did not recommend using it on anything but EC2 instances (because of "security" which never really made sense their reasoning).

The reasoning is basically that there are some security and isolation guarantees you don't get in Kubernetes that you do get on bare metal or (to a somewhat lesser extent) in VMs.

In particular for Kubernetes, Vault wants to run as a non-root user and set the IPC_LOCK capability when it starts to prevent its memory from being swapped to disk. While in Docker you can directly enable this by adding capabilities when you launch the container, Kubernetes has an issue because of the way it handles non-root container users specified in a pod manifest, detailed in a (long-dormant) KEP: https://github.com/kubernetes/enhancements/blob/master/keps/... (tl;dr: Kubernetes runs the container process as root, with the specified capabilities added, but then switches it to the non-root UID, which causes the explicitly-added capabilities to be dropped).

You can work around this by rebuilding the container and setting the capability directly on the binary, but the upstream build of the binary and the one in the container image don't come with that set (because the user should set it at runtime if running the container image directly, and the systemd unit sets it via systemd if running as a systemd service, so there's no need to do that except for working around Kubernetes' ambient-capability issue).
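A minimal sketch of that rebuild, assuming the official alpine-based image with the binary at /bin/vault (the tag and paths may differ for your setup):

    # Dockerfile: bake IPC_LOCK onto the binary so Kubernetes' dropped
    # ambient capabilities no longer matter (a workaround sketch, not an
    # officially supported image)
    FROM hashicorp/vault:1.15
    USER root
    RUN apk add --no-cache libcap && \
        setcap cap_ipc_lock=+ep /bin/vault
    USER vault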

> It always surprised me how these conversations went. "Well we don't really recommend kubernetes so we won't support (feature)."

> Me: "Well the majority of your customers will want to use it this way, so....."

Ha, I had a similar conversation internally in the early days of Boundary. Something like "Hey, if I run Boundary in Kubernetes, X won't work because Y." And the initial response was "Why would you want to run Boundary in Kubernetes?" The Boundary team came around pretty quick though, and Kubernetes ended up being one of the flagship use cases for it.


Thanks for the detailed explanation - some of what you say sounds familiar, but this was nearly 5 years ago, so my recollection of their reasoning is fuzzy. I recall it being something like they didn't trust etcd on Kubernetes not to be compromised. My counterargument internally was "if your etcd cluster is compromised by a threat actor, you have way bigger problems to worry about than secrets."

My vague recollection is that the concern was that the etcd store (specifically the keys pertaining to the Vault pod spec) could be modified in a way that would compromise the security of the encrypted Vault store when a Vault pod was restarted. It's been a long time since I remember that being a live concern, though, so I've mostly recycled those neurons...

(I have no idea what your infra is so don’t take this as prescriptive)

My feeling is that for the average company operating in a (single) cloud, there's no reason to use Vault when you can just use AWS Secrets Manager, or the equivalent in Azure or GCP, and not have to worry about fucking etcd quorums and so forth. Just make simple API calls with the IAM creds you already have.
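e.g. fetching a secret is a single call with whatever IAM role the instance or function already has (secret name made up):

    # no Vault cluster, no unseal ceremony, just the existing IAM creds
    aws secretsmanager get-secret-value \
        --secret-id prod/myapp/db-password \
        --query SecretString --output text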


Caveat: the HCP hosted vault is reasonably priced and works well.

However, strong agree on using your home cloud's service.

We used Vault with Heroku and were happy.


> Caveat: the HCP hosted vault is reasonably priced and works well.

HCP hosted Vault starts at ~$1200/month; you'd have to use a metric shit ton of secrets in AWS or GCP to come close to that amount. Yes, Vault does more than just secrets, but claiming that anything HC sells is reasonably priced is a reach.
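(For scale: at the AWS Secrets Manager list prices I remember, roughly $0.40 per secret per month plus $0.05 per 10,000 API calls, $1,200/month is the storage cost of about 3,000 secrets before API traffic even enters the picture.)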


Ah, they have changed the public pricing page. Maybe we were on a grandfathered-in deal. They had a starter package between free and enterprise, with configurable cluster options, that was $60ish a month. We heavily used the policies, certs, and organization features, which made it a no-brainer at that price point for things outside AWS, like Heroku.

We were running about $12/mo in aws secrets with no caching and no usage outside our aws services. I taught the team how to cache the secrets in the lambda function and it dropped to a buck a month or less.

If they killed off the starter package then you are right, there are only outrageous options and HCP would not be worth considering for small orgs.


This^. Unless you're in a hybrid/multi-cloud environment, there's not much point in using Vault.

ime that’s a way better product to use for secrets management unless you’re trying to do very advanced CA stuff.

We really need a 2.0 version that actually delivers the promise these tools never reached because of legacy decisions.

Community fork https://opentofu.org/

Indeed. Owned by The Linux Foundation, so this will remain OSS forever; no rug pulls are possible.

Tomorrow's title:

"Linux Foundation joins IBM to accelerate the mission of multi-cloud automation and bring the products to a broader audience of users and customers." ;)


I wouldn't bet on that. Some Linux Foundation-hosted projects, like Zephyr, not only have nothing to do with Linux, they are also under licenses that are quite business-friendly.

So yeah, one can always fork the last available version; whether the fork then survives to an extent that actually matters beyond hobby coding is seldom the case.

How many OpenSolaris forks are actually relevant outside the companies that own those forks?

Also, IBM, Microsoft, Oracle... and others that HN loves to hate are already members.


Back in 2015 I discovered a security issue with some Dell software [1]. I remember vividly getting an email about a job opportunity, based entirely on this, from a company with a strange name that, after some googling, turned out to make a thing called Vagrant. They seemed super nice, but I was far too young and immature to properly evaluate the opportunity, so after a few emails I ghosted them out of fear of the unknown. In 2015 they had 50 employees and had just raised a $10 million Series A [2].

Regardless of various things that have happened, or things that could have been, the company has pushed the envelope with some absolute bangers and we are all better for it, directly or indirectly.

Regardless of what the general opinion is of Hashicorp’s future post-IBM, they made an impact and that should be celebrated, not decried or sorrowed over for lack of a perceived picture perfect ending.

Such is life.

1. https://tomforb.es/blog/dell-system-detect-rce-vulnerability...

2. https://www.hashicorp.com/about/origin-story


I guess you weren't active on Hacker News around 2013, because Vagrant was absolutely popular here a long time ago. Mitchell Hashimoto showed up a lot too when we were talking about Vagrant back then. If only you had procrastinated more, you might have ended up as employee #51 :)

Official: https://newsroom.ibm.com/2024-04-24-IBM-to-Acquire-HashiCorp...

Confirming what everybody knows, IBM views HashiCorp's products as Terraform, Vault, and some other shit.



But what about the dozens of us using Nomad and Vagrant?

"Additional products – Boundary for secure remote access; Consul for service-based networking; Nomad for workload orchestration; Packer for building and managing images as code; and Waypoint internal developer platform" - Vagrant doesn't even get a mention...

I expected this when the Terraform license changed. Not IBM specifically, but it was obvious they weren't interested in / able to continue with their founding vision.

Hashicorp had a $14 billion IPO in Dec 2021 and was trading at ~$4.7 billion right before the acquisition announcement. At that point it doesn't matter what the company or its founders want, or what their long-term vision is. Shareholders are in charge, and heads are going to roll if the price doesn't get back up quickly, by any means necessary.

Yet another example of why I think it's a mistake to take your company public. If I put in the work to build up a successful business, no way would I ever let it be turned into a machine that ignores long term health for the sake of making the stock price go up.

You have no idea what decisions you'll make if you ever were to get that successful.

I'm sure you've broken many promises to your younger self.


If companies didn’t go public regular people would not be able to invest in innovation. As much as people hate it, public markets democratize access to investments

True but no company has a vested interest in the democratization of investment. IPOs are purely about getting paydays for founders.

*and early investors. Mostly early investors in many cases.

The crux of the problem is that the SV model is completely broken and leads to these cycles. I wish it were more about sustainable progression and less about rapid, half-baked innovation to achieve paydays for greedy founders/investors.

Huh, they won't get a payday if no one uses their products. And there are plenty of examples of failed products. If people have the ideas and execution capability for sustainable progression, they can very well try outside the valley. It is not like companies don't start outside the valley.

Which is why the majority of startups fail, and then a lucky unicorn comes along and funds the next cycle. Look at how many poor ideas got massive investment on the bet of a payout; so many blockchain companies, and none solved a real-world problem. Lots of potential investment in things that could have greatly helped many more people in the world was instead poured into a technology looking for a problem.

I agree that the vast majority of the blockchain companies were "technology looking for a problem" (or at least, technology looking for another problem besides a money ledger), but blockchain really was (is) a pretty damn good technology. The most unfortunate part is that the only thing it may really stick for is DRM :-(

I guess it's not possible to fuel "hypergrowth" this way, but why not just issue debt? Let the market buy in to your growth with a healthy dividend and reduced risk.

This is misguided and myopic. There are many valid reasons for a company to go public besides a "payday for founders". Here are a few:

1. Easier access to capital markets, and liquidity in general

2. Marketing/publicity provided by equity research coverage

3. Legitimacy, transparency, and trust-building for customers (public filings mean outsiders can gauge the health of the business)

4. Thanks to number 3, companies have an easier time getting larger corporations as clients or partners

Just because you don’t understand something doesn’t make it bad


Yeah, they sure innovated with all that public money they got over... three years? What did they release in the last three years, exactly?

Also, what "democratic access" did people get? The ability to buy at $80 a share and then eventually sell it at $30?

Does anyone really believe this kind of stuff anymore?


What is there to believe?

Capitalism is not a religion, there is no belief involved.


I'm referring to the parent comment, but pithy reply.

If I put in the work to build a successful business and someone offered me a hundred million dollars for it, I’d have a hundred million dollars.

While I would think that, realistically most of my principles are for sale for a few billion dollars.

If anyone is listening, I'll gladly undercut this guy by a few orders of magnitude

You need to leave some headroom for the inevitable bargaining; let's act as sensible cartel members and not undercut ourselves in a race to the bottom, capisce?

> Yet another example of why I think it's a mistake to take your company public. If I put in the work to build up a successful business, no way would I ever let it be turned into a machine that ignores long term health for the sake of making the stock price go up.

It's a mistake if you care about the long term health of a company. But... why should you?

Hashicorp had a great run, and contributed a lot of great open source products over the years. Today, their products have large user bases and healthy forks seem likely. The founders and early employees cash out, and it's a win for everybody involved.

Nothing lasts forever.


If you go through VC, you are expected to go public to generate returns for the early investors. It's baked in.

My fear of missing out by not using any HashiCorp product is officially over

Certainly an interesting turn of events. I really enjoy using Terraform (and Terraform Cloud) for work, but the license changes made me cautious about integrating any further.

What were the licensing changes? I see a lot of references to them as though they were common knowledge, but I'm not aware of them.

Edit: found something: https://www.hashicorp.com/blog/hashicorp-adopts-business-sou...


Nobody else is now allowed to make a public offering of a Terraform-using product. That is, you cannot provide Terraform as a service. GitLab, Azure DevOps, etc. all have to move to something else, as they cannot provide Terraform runners without a special license.

This was a major blow to the participating open-source community. The license now used is also vague and untested.


Also, you should know that while the Terraform language is okay (albeit a little too dogmatic in a functional-programming sense for my taste), the Terraform Cloud product (runners for Terraform executions) is pretty terrible, slow, and overpriced - snatching defeat from the jaws of the victory the Terraform language had won.

This encouraged at least 4 companies to launch Terraform-Cloud-like products, and rather than compete and provide better service, Hashicorp responded by saying "take it or leave it, internet!" and closed the open-source license on the interpreter (BUSL)... At my previous company we were driven away from Terraform Cloud and into the arms of env0. When it often takes 10 minutes for an execution to begin while you have no other executions in progress, you realize that the Terraform Cloud SaaS product is just a total joke...


Totally agree. I had to switch to Scalr. I’m now paying more than I did with Terraform Cloud, but I’m happier and finally have all the features I needed.

Those who take this to the next level by offering enterprise-like features, such as change windows and approval gates from Jira/ServiceNow, will land whales.


Yeah, they went from a more permissive license (Mozilla MPL) to a less permissive one (BUSL), but I can kind of understand why. I can also understand why the OSS community is upset, and after Hashicorp went after OpenTofu recently, I'm siding more with the OSS community here.

Before the license change, another project (Pulumi) built something that was basically a thin wrapper on Terraform plus some convenient functionality. They claim they tried to submit PRs upstream. Hashicorp loudly complained about organizations that were using their source without contributing back when they changed to BUSL. I wasn't close enough to be aware of the details, but maybe there were other groups (I can think of Terragrunt, too, though I'm not sure they're included in the parties Hashicorp was complaining about; Terragrunt did side with OpenTofu after the license change). The change also means cloud providers can't stand up their own Terraform cloud service product, as that would run afoul of the BUSL license.

When the license was updated to BUSL, several contributors forked the last MPL-licensed version into OpenTF, later renamed OpenTofu. Some say Hashicorp should have gone fully closed-source to own their decision. I think they knew they were benefiting greatly from several large corporations' contributions to provider-specific configuration templates and types.

Then, earlier this month (two weeks ago?) Hashicorp made a claim against OpenTofu, alleging it had stolen code from the BUSL-licensed version, with OpenTofu outright denying the claim. We'll see how that shakes out, but it shows that Hashicorp wasn't merely concerned about copyright and business/naming concerns (a big part of why other BUSL-licensed projects chose the license). I don't know if the upcoming M&A had anything to do with their license decision, but I kind of doubt it? Maybe others here have more context or are more familiar with these matters than I am.


It was widely speculated months ago, when the change happened, that Terraform was the scapegoat for the licensing change. The actual impetus was IBM reselling Vault. IBM then helped push the OSS fork of Vault (OpenBao), and this acquisition brings the whole license-change saga to a convenient conclusion for IBM.

Almost all the talk I saw internally, from well before to well after the license change, about competitors "taking advantage" of our open-source versions was about TFC competitors like Spacelift, Scalr, etc. and Terraform OSS. The Vault competitor mentioned most often was Akeyless but for reasons less like the TFC competition. I saw IBM Cloud Secrets Manager mentioned maybe once or twice.

I'm sure IBM Cloud's Vault offering was part of the decision, but from where I was sitting, it didn't look like the reason or even the primary reason.


It's interesting Akeyless is mentioned most often as a vault competitor. Why is that?

Well, folks are already migrating from Terraform to OpenTofu. I am sure similar open-source forks of HashiCorp's other products, unencumbered by IBM's business model, will be out pretty soon.

So all in all, I think this is another big win for open source, even if a little indirectly.


> By joining IBM, HashiCorp products can be made available to a much larger audience, enabling us to serve many more users and customers.

I'm really wondering who is kidding who here. Is it IBM or Hashi?


IBM has its mitts in finance, defence, aerospace -- and those industries generally stick to IBM / IBM-sanctioned products. So with IBM selling Vault / Boundary (in particular), they will get better adoption.

In my experience IBM uses the sexy stuff (used to be OpenShift) to get meetings then sells the same old boring IBM software and services after the initial meetings.

Dude some 28 year old marketing rep wrote that copy, don’t take it seriously

It's a shame that HashiCorp gave up. The govt bans foreign competition like TikTok, and the in-house competition doesn't have the stamina. Doesn't bode well for capitalism.

"Give up" is not really the appropriate terminology, the board of directors are the only ones that really have a say in acquisitions, and if the offer was given with a sufficient premium their own choice is limited by willingness to face shareholder lawsuits if they turn it down.

> IBM will pay $35 per share for HashiCorp, a 42.6% premium to Monday's closing price

Is that an insane premium or what?


I think typical premium is about 20% for acquisitions.

The amount may have been negotiated prior to this month's downturn, which Hashicorp was hit pretty hard by (they had about a 10% fall based on what I'm seeing).


Yea, I think it often depends on where a company's stock has moved recently. IBM's offer is still below HashiCorp's 52-week high. That means there's probably a lot of current investors who likely wouldn't approve a deal at a 20% premium. If your stock is near its 52-week high, then a 20% premium looks a lot more reasonable.

From April to August of last year, HashiCorp regularly traded more than 20% above Monday's close. Many investors might think it would get back there without a merger - and it had been higher. IBM is offering $35/share, which is close to the $36.39 52-week high. In some cases investors are delusional and just bought at the peak; in other cases a company's shares have been undervalued and the company shouldn't sell itself cheaply.

I don't think one can really have a fixed percent premium for acquisitions because it really depends. Is their stock trading at a bargain price right now? Maybe people who believe in the stock own a lot of the company and don't have more capital to buy shares at the price they consider to be a bargain - but would vote against selling at that bargain price even if they can't buy more. They're confident other investors will come around. An acquiring company wants to make an offer they think will be accepted by the majority of investors, but also doesn't want to pay more than it has to. If the stock has been down and investors think it's a sinking ship, they don't have to offer much of a premium. If the stock is up a ton and investors sense a bubble, maybe they don't have to offer much of a premium. If the stock has been battered, but a lot of shareholders believe in it, then they might need to offer more of a premium.


Analyst consensus I've seen on long-term price has been floating around $32-34 per share. Take that with as much salt as you think it needs but it's at least interesting that it's within shouting distance of (but not over) the IBM offer.

It was a 63% premium when IBM bought Red Hat. Sadly I'd sold my RSUs about 2 days before :-(

Same, but with ESPP stock, and it was a few months earlier. Ouch.

I still voted no.

Yeah, congrats to the people who held the stock yesterday!

So, will they now add JCL extensions to HCL? Will pulling TCL into the fold be the next plan?

They'll do what they should have done years ago: give up on all this fuddy duddy syntax and just go with XML. ;-)

Was this supposed to make my eye twitch? Don’t give them any ideas.


Oh God, of course they would.

The good news is, you no longer need Dhall or some crazy scripts to generate your Terraform files. Just a bit more XML and an XSLT stylesheet oughta do it!

Although I think they have very different use cases, this means IBM owns both Ansible and Terraform, both of which claim to be IaC.

Although there is significant overlap between the two, I prefer Terraform for resource provisioning and Ansible for resource configuration.

Same, but now IBM will be able to merge them to create Terrible (or Ansiform). ;)

I like the joke. But a better integration between terraform and ansible for config would be pretty neat.

How would you imagine that working? I think a lot of people would love that, but I have seen very few specifics so far.

Same. I view them like peanut butter and jelly. Terraform is my preference for new stuff and everything that isn't a stateful VM, and Ansible is my preference for managing manually created resources (which I try very hard to avoid, but always end up with some) and for managing VMs (even VMs created by Terraform). For stateful services (like a database cluster) Ansible is so much better it's not even a question, and for cloud resources (S3 buckets, managed databases, etc.) Terraform is a much better approach. I've never felt the two were really competitors, even though there is some gray area where they overlap.
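A hedged sketch of how the two snap together (resource names and AMI are made up):

    # main.tf: Terraform provisions the instance...
    resource "aws_instance" "app" {
      ami           = "ami-0123456789abcdef0"  # placeholder
      instance_type = "t3.micro"
    }

    output "app_ip" {
      value = aws_instance.app.public_ip
    }

...and Ansible configures it, fed the address as an inline inventory (the trailing comma tells Ansible it's a host list, not an inventory file):

    ansible-playbook -i "$(terraform output -raw app_ip)," configure.yml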

Soon to be built into Ansible Automation Platform. Should only cost $100 per managed resource.

Is the implication that we won’t be able to freely use ansible-playbook anymore, and/or development will end on the “freely” available one?

No, the implication is that Terraform will become prohibitively expensive to use. AAP has been around for a while, as Red Hat’s downstream of (iirc) AWX. It’s also quite pricey, like Terraform may become.

Thank you

It's really sad to me that Hashicorp never found a monetization model that worked.

Every company I worked for over the last 6 years used Terraform; there really wasn't anything else out there, and though there were complaints, it generally worked.

It really provided a lot of value to us, and we definitely would have been willing to pay.

Though every time we asked, we wanted a commitment to update the AWS/GCP providers in a timely fashion for new features; they would never commit, and instead tried to shove some hosted Terraform service down our throats, which we would never agree to anyway due to IP/security concerns.


Perhaps an open source fork of Terraform, where the cloud providers themselves maintain the provider repos, is the correct end-state. AWS started doing that in the last few years, assigning engineering resources to the open source TF provider repos.

That way, the profit beneficiaries bear the brunt of the development/maintenance costs.


Thereby really putting the Corp into HashiCorp.

I wonder how this will work with Red Hat. Traditionally, Red Hat and HashiCorp competed more directly than other IBM portfolio products, fighting over the same customer dollars.

Number one rule of megacorp M&A: Juice quarterly numbers first, ask capital allocation efficiency questions never.

terraform changed to business source license pretty recently too: https://www.hashicorp.com/blog/hashicorp-adopts-business-sou...

> terraform changed to business source license pretty recently

Now we know why!


I suspect you have the causality on this backwards: https://news.ycombinator.com/item?id=38579175

Wow, I read that thread with great interest at the time, and reading it now knowing about the acquisition is quite the mind blowing experience.

I worked at a startup that got acquired by a big company and we switched our custom proprietary license back to Apache 2 after acquisition. The reason we switched in the first place was because it's what we thought was best when we were out on our own. Being owned by a hardware company, you can have the software for free. (We still sell licenses and have a cute little license key validator, though.)

IBM will gut everything to the bone and send most of the jobs to India.

There will be nothing worth using pretty soon, as we all move on to the next big FOSS thing.


There is plenty of money to milk from existing customers using Vault. For everyone else, yes - time to move on.


I've spent the last 3 days learning Nomad for my homelab setup; hope things stay more or less the same for it :)

Nomad will indeed stay the same if all future development ceases.

Nomad has a remarkably strong community for its size. I'm almost positive it will continue to live on in some form, even if completely hard-forked.

I know if nobody else does anything I will do something myself, personally.

I love Kubernetes, however I feel like things like Nomad and Mesos have a space to exist in as well. Nomad especially holds a special place in my tech-heart. :)


> Nomad especially holds a special place in my tech-heart.

Same. I'm not a fan of the recent licensing changes and probably won't use it for any new installations, but Nomad enabled me to be an entire ops team AND do all my other startupy engineer duties as well with minimal babysitting. It really just works, and works fantastically for what it is. Nomad is like the perfect mix of functional and easy to manage.


The question is what to replace it with?

There don't seem to be enough forces behind creating an MPL fork, but at the same time we have a gap between "Docker Compose is enough" and running Kubernetes. There are many situations where going Kubernetes (or even lighter k0s/k3s-type setups) does not make any sense.

My guess is that no organisation which can afford to dedicate resources to contributing or creating a fork needs Nomad. So we end up with a big gap in the ecosystem.


Right, it's unfortunate. Maybe IBM will open the licensing back up and pour some resources into Nomad? I doubt it, though.

terraform changed to business source license pretty recently too: https://www.hashicorp.com/blog/hashicorp-adopts-business-sou...

When they did this, the community forked it into https://opentofu.org/
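For existing configurations it's close to a drop-in swap (a sketch, assuming state from the last MPL-era 1.5.x releases):

    # install the tofu binary via your package manager of choice, then:
    tofu init    # re-initializes the working directory against the same state
    tofu plan    # should show no changes for a compatible configuration
    tofu apply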

In what sense did they side with OpenTofu? Genuinely curious.

I think you meant to reply to this one:

https://news.ycombinator.com/item?id=40149230


Considering IBM sided with the fork, I suspect it'll be reverted for most or all of Hashicorp's projects.

I bet they'll organize it under Red Hat, and Red Hat will apply their open source policy to it, and that will involve reverting to OSI approved licenses

That doesn’t seem like what’s happening from first appearances. Looks like it’ll remain separate for now which means no RH influence to fix the licensing boondoggle.

Red Hat is a shell of itself. There is no appetite for taking on Terraform when Ansible is their ugly baby.

I've found they complement each other: one provisions infra, the other customizes that infra for your needs.

But I could be totally off-base.


Yes. I did this a while back: https://github.com/radekg/terraform-provisioner-ansible. That received some contributions from IBM. Unfortunately, HC never wanted to maintain it, and then in 0.15 they replaced provisioners with providers or plugins (can't remember anymore). I had a couple of discussions with their OSS head for TF at the time, but the bottom line from them was "why don't you rewrite it in your spare time". The problem was their replacement didn't give access to the underlying communicator (your SSH or WinRM). So I threw in the towel.

You’re right, I was off base with my comment. They are indeed complementary.

I mean, they bought Red Hat, and killed CentOS; I can say after 25 years in enterprise IT, I have zero trust in IBM to keep any open source licensing "open".

IBM didn't kill CentOS.

They were under IBM ownership at the time, so IBM did kill it. The software now branded as CentOS is basically Fedora, which is fine for desktops but never felt good on servers. CentOS was perfect for a lot of us sysadmins back in the day to use on our own servers etc., while using Red Hat at work. We also used it for anything PoC, or for servers that did not require support. These days licensing is easier using models like AWS subscriptions, but we used to buy licenses in bulk, and if there were not enough licenses, we had to do the whole procurement dance.

Side note: in the 12 years that I used Red Hat at work, we used the support twice, and both times they forwarded articles that we had already found and implemented. However, enterprise always demands a support contract behind critical systems, to have someone to blame in case of disaster.

Honestly, who knows what would have happened if Red Hat was left as an independent entity, but we do know for sure that they did make the changes after the acquisition.


I work at Red Hat. IBM was not involved in the decision to kill CentOS.

> The software now branded as CentOS is basically Fedora

CentOS Stream (what replaced CentOS) is vastly more similar to CentOS than Fedora.

It's CentOS with rolling patches instead of those same patches bundled into minor releases every 6 months. Only the release model is different from RHEL/CentOS; otherwise it's built the same and holds to the same policies in terms of testing, how updates are handled, and compatibility.

Fedora on the other hand is very, very different. Packages are built with different flags, different defaults (e.g. filesystems), very different package versions, a different package update policy (even within one major release Fedora is much more aggressive than CentOS Stream / RHEL / CentOS), etc.

I understand that having a near-exact replica of RHEL supported for 10 years was very convenient, and that the way the EOL was announced, and the timelines, sucked massively. But CentOS Stream is suitable for a large number of the use cases where CentOS was used previously; it is not "basically Fedora". It's more like 98% RHEL-like, whereas Fedora is doing something else entirely.


I also should have mentioned that the CentOS Stream lifecycle is 5 years whereas Fedora's is 13 months

5 years is less than 10, but that's a much smaller difference than 10 vs 1.


That is the kind of thing that could have been a negotiation tactic for purchasing Hashicorp, not necessarily done in good faith.

Hashicorp's relicensing could also have been a tactic to get the sale to happen.

IBM didn't just fork Vault to make a statement -- IBM Cloud Secrets Manager was (openly) built directly on Vault OSS.
