I'm impressed by how far GitLab has come in a short amount of time. It's a pretty well-designed platform, when it comes down to it. I think the Vue-based user interface and the data model are probably the cornerstones of it, and using Ruby on the backend just helps them iterate quickly. Ruby also seems to be getting more robust [1]. Their CI/CD platform takes a really great approach, emphasizing the GitLab Worker as a portable CI runner that communicates through the API. For resource-intensive apps it helps that the number and size of CI workers aren't limited by a SaaS plan, as they tend to be with other CI/CD platforms.
One of my favorite GitLab features: without having to install any other tool, I get a link in the push response to create my MR. Such a simple feature, yet so convenient.
If you push a new branch to a GitLab project, we return a URL to immediately create a merge request, which appears in the Git output in your terminal. So rather than going to GitLab and clicking a button, you just click the link in your shell.
I recently crossed swords with .gitlab-ci.yml. An obfuscation layer that fights back.
Ditto Jenkinsfile and whatever you wanna call GoCD's config.xml.
This silliness reminds me most of all those ETL workflow obfuscation frameworks, like BizTalk and Talend.
PS- If anyone has the bad judgement to use YAML, it'd be nice if the linter reported something more specific than "syntax error line 1 char 1" for deeply buried typos. I thought XML sucked until I met JSON. I thought JSON sucked until I met YAML. What's next? The resurrection of ASN.1?
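Part of the pain is YAML's implicit typing: values that look like strings silently become other types unless quoted. A small illustrative fragment (YAML 1.1 semantics, as most parsers implement them; the keys here are made up):

```yaml
country: NO        # parsed as the boolean false, not the string "NO"
version: 3.10      # parsed as the float 3.1, not the string "3.10"
quoted: "3.10"     # quoting keeps it a string
```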
PS2- It's not programming if you can't set a break point. The edit, eval, fail, profanity loop for troubleshooting .gitlab-ci.yml is pretty intense.
But how do you tell a shell script to be a multi-stage build? You can't just write `set OS=macos` and suddenly your OS changes from Debian to macOS. The best you could do is to SSH into such a machine, but how do you tell your build system to provision a machine with those parameters? You would essentially have to build a new build system just for that. Or you could just use a config file -- whether you call it config.ini, .gitlab-ci.yaml, or build.json should not matter. Furthermore, how would you execute a bash script on Windows? You would at least have to enable the Windows Subsystem for Linux and make sure it's a Pro version of Windows.
I personally don't like having a whole shell script inside the .gitlab-ci.yaml file either -- that's why I have a separate script called setup.bash/build.bash inside a CI folder and just write `script: ./ci/build.bash` inside gitlab-ci.yaml. But nobody ever said you cannot do that. The gitlab-ci.yml is just a configuration file that tells the build system what you want your environment(s) to be.
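A minimal sketch of that layout (the paths and job name are just my own conventions, not anything GitLab requires):

```yaml
# .gitlab-ci.yml stays a thin pointer into version-controlled scripts.
build:
  script:
    - ./ci/build.bash   # all the real logic lives in a plain shell script
```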
I agree that error reporting for YAML files sucks -- but at least they are more consistent and easier to parse than XML files. And they are also way more readable than JSON files -- they have essential features such as long strings and comments. The only annoying thing with GitLab CI (and any other CI system) is making sure it actually works. It often takes me 10 tries to get everything working correctly.
How would you simplify testing on multiple systems/OSes? Or have one project tested against multiple versions of a language (because you need backwards compatibility)? It's nice if your build system is as simple as
make build test publish
and only needs to work on a Linux distribution of your choice, but for many projects, that's just not enough. Your .gitlab-ci file just tells your build system: for debian-python3.{3,4,5,6} execute this, for Windows do that. Btw: debian-python:devel is allowed to fail because it is not stable. It's just nice to have an idea whether it might work on future versions.
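A sketch of what that looks like (the image names and job split here are illustrative, not from any real project):

```yaml
# One job per Python version, each using a different Docker image.
test:3.6:
  image: python:3.6
  script: ./ci/test.sh

test:3.7:
  image: python:3.7
  script: ./ci/test.sh

# Allowed to fail: just an early signal for future versions.
test:devel:
  image: python:rc
  script: ./ci/test.sh
  allow_failure: true
```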
I can't comment directly, obviously. All I have is my faith (irrational belief) that we can reengineer our workflows to ease our own suffering.
I used to create prepress (print manufacturing) software for Mac (old and new) & Windows. Multi-platform, multi-executable, multi-language, multi-target builds. It'd pull all the resource bundles (L10N/I18N), recreate the screenshots, rebuild the PDFs (manuals, marcom). The QA box farm would rerun all the regressions (combo of VirtualPC and VMware boxes) on multiple versions of multiple OSes (eg Japanese Windows) with all the necessary drivers (eg security dongles, which totally sucked). We'd end up with CD images and downloadable zips on our private website for VARs and high end customers.
Seemed to work pretty good.
I used to do a lot of electronic medical records stuff. My team (of 4 core devs) supported five regional exchanges (the first to market), 80 hospitals plus all their partners, pharma scripts, labs (internal and external), feeds to the govt (eg CDC, SSI). Basically 100s of data feeds of every mutant format and protocol imaginable. At the time, all of the data had to be stored on each customer's servers. Meaning we had to deploy and support in situ. Meaning firewalls, reverse SSH, forklifting files into place, whatever was permitted. We did multiple builds per week. Most deploys were push. Some we had to setup as pulls. We implemented auto updates, but never had the QA/Test resources to achieve the necessary confidence, which was a shame.
Seemed to work pretty good.
Sure, today's cloud deploys are complicated in other ways. But the more stuff changes, the more it all looks the same.
PS- Rereading your comment... Maybe consider changing some of the steps from push to pull. I was super inspired by the architecture of postfix (email server). Trying to mimic it simplified a lot of my own efforts.
You still need something to describe the order and dependencies of those shell scripts, as well as when to run them (branches, tags) and how to handle the artifacts (expiry time, etc). Which should preferably be something not Turing-complete, so you can make a UI for it. Draw the graph, etc etc.
Take those parts out, and almost all that's left in .gitlab-ci.yml is just the `script` key.
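For example (a sketch; the job names and script paths are made up), the non-Turing-complete part is just declaring ordering, triggers, and artifact handling around those `script` keys:

```yaml
stages: [build, test]

build:
  stage: build
  script: ./ci/build.sh
  only: [branches, tags]     # when to run
  artifacts:
    paths: [dist/]
    expire_in: 1 week        # how long to keep the artifacts

test:
  stage: test                # runs after build, receives its artifacts
  script: ./ci/test.sh
```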
If your visual programming fuzzy whatzzit drops down to metal (shell commands in this case), just use the metal.
I did get gitlab-runner to run locally. That was fun. Why do I have to commit my changes to my local branch so the runner can see them? MY CODE IS RIGHT THERE!
With a shell script, you'd need to run the CI pipeline to figure out the execution graph. In a restricted language, you don't have to - it can just describe that graph.
> If your visual programming fuzzy whatzzit drops down to metal
I don't get you. All programming eventually drops down to metal one way or another. It is not about visual programming, it's about appropriate languages. Shell scripts are not exactly appropriate for describing a graph. Especially a graph meant to be run on a distributed system - so, really, *sh may not be the best fit here. Maybe you have some better idea, but I'm just afraid you'll end up with some shell-based DSL that would be no better than YAML, but suffer from all possible issues (for example, there's no concept of pure functions in shell scripting, so every CI run plan can take a completely different shape)
Oh. If you're a person who likes all their configuration files being programs - then I won't argue as it'll be a matter of taste.
> Why do I have to commit my changes to my local branch so the git-runner can see them?
Yeah, I'm also disappointed in this. It was dismissed as "wontfix" a while ago https://gitlab.com/gitlab-org/gitlab-runner/issues/1545 and seems that it stays this way even now (they've deprecated `exec` at some point but then reconsidered).
Had to write myself an alias for `git commit --amend && gitlab-runner exec`
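Roughly like this, as a shell function rather than a literal alias (a personal sketch; it assumes gitlab-runner is installed locally, and "test" is just my default job name, not anything standard):

```shell
# Amend the work-in-progress commit, then run one job locally.
citry() {
  git commit --amend --no-edit &&
  gitlab-runner exec shell "${1:-test}"
}
```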
I'm a simple cave man. I don't understand why there needs to be an execution graph. Conditional logic?
"Everything programming eventually drops down to metal one way or another."
Keeping with the trog theme...
I need things to be as explicit as possible.
I just couldn't figure out why my script commands were not doing variable substitution. I expected my line ' - echo id: $CI_BUILD_ID' to cut and paste 'echo id: $CI_BUILD_ID' to the command line and run that bad boy as-is. And I'd see the expected output in the log. Just like if I run 'echo id: $CI_BUILD_ID' locally.
What could be more simple?
Nope.
gitlab-ci does some ninja code escape mangling thing (turn on debugging output to see it in action). So in your log you continue to see 'echo id: $CI_BUILD_ID' instead of 'id: 123456'.
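The debugging output mentioned above can be enabled per job with a variable; a minimal illustrative fragment (the job name is made up):

```yaml
# With CI_DEBUG_TRACE the runner logs every command after expansion,
# so you can see exactly what ran. Careful: it can also print secrets.
debug-me:
  variables:
    CI_DEBUG_TRACE: "true"
  script:
    - echo "id: $CI_BUILD_ID"
```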
I have no idea why anyone would do that.
I want to run commands. Why not just run them as-is? Why the obfuscation layers?
--
Inevitably, abstraction layers introduce an impedance mismatch, where some behavior will surprise you, where you have to do a workaround. Then you're fighting the tool instead of solving your problem directly. Then you say "fuck it" and just have the layer call your loose code (eg shell script) directly. Then what's the point of having the layer at all?
There are some extra rules, like deploy and slow tests only running for release tag builds (vX.Y.Z), and lint being allowed to fail for branches with a "wip-" prefix in their name.
Lint runs in parallel with build + test (multiple runners, on different machines, each having some concurrency). Different tests run in parallel so you have to wait a little less. Tests only run after build is ready. Release only runs if both lint and tests succeed. Deploy is after the release.
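Sketched as a .gitlab-ci.yml (the script paths are illustrative, and the wip- relaxation is simplified to a blanket allow_failure):

```yaml
stages: [build, test, release, deploy]

lint:
  stage: build               # same stage as build, so they run in parallel
  script: ./ci/lint.sh
  allow_failure: true        # e.g. relaxed for wip- branches

build:
  stage: build
  script: ./ci/build.sh

test:
  stage: test                # waits for the build stage to finish
  script: ./ci/test.sh

release:
  stage: release             # only if lint and tests succeed
  script: ./ci/release.sh
  only:
    - /^v\d+\.\d+\.\d+$/     # release tags vX.Y.Z

deploy:
  stage: deploy              # after the release
  script: ./ci/deploy.sh
  only:
    - /^v\d+\.\d+\.\d+$/
```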
You can write this graph as a shell script, but this is not without downsides:
1. You need a DSL (a bunch of pre-defined shell functions or external commands) that control the individual jobs in a sane manner.
2. You can't just look at the script and draw the pipeline graph (so before the pipeline runs, the CI/CD system can't tell you what's going to happen next).
3. You can't guarantee that the plan will be the same if the pipeline is re-run. With shell scripts, program output may change, even though all inputs are the same.
If all you need is to run a simple shell script, with no multi-machine parallelism and no fancy conditional logic - all of the YAML stuff is overkill. Though in that case, all you need is approximately this:
run-ci:
  script:
    - ./run-ci.sh
Which doesn't look like any large sacrifice to me.
> gitlab-ci does some ninja code escape mangling thing (turn on debugging output to see it in action). So in your log you continue to see 'echo id: $CI_BUILD_ID' instead of 'id: 123456'.
Not sure I've encountered this much, but yes, this probably makes things complex. I agree that this can be a pain point.
> I have no idea why anyone would do that.
I'm not sure, but the rationale is likely that they didn't want secrets to leak. Variables are the place where you store various secret information, like CI signing keys or remote server credentials. They probably didn't want to leak what happens with e.g. `curl --user "ci:$CI_DEPLOY_PASSWORD" "$CI_DEPLOY_URL"`.
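A sketch of the idea in Python (my own toy version, not GitLab's actual implementation): scrub known secret values from each log line before it is written out:

```python
def mask_secrets(line, secrets):
    """Replace every known secret value with a placeholder before logging."""
    for value in secrets:
        line = line.replace(value, "[MASKED]")
    return line

# Hypothetical secret value, as a CI variable would hold it.
secrets = ["s3cr3t-token"]
cmd = 'curl --user "ci:s3cr3t-token" "https://deploy.example"'
print(mask_secrets(cmd, secrets))
# prints: curl --user "ci:[MASKED]" "https://deploy.example"
```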
The term CI/CD suddenly appears everywhere.
Can «continuous integration and continuous delivery» be shortened to CI&D or CIaD? Or how do you pronounce CI/CD? AC/DC backwards?
OP is wrong. Apple had a huge migration (like tens of thousands) over to GHE recently after they released a migration tool for moving from GitLab. We do still have groups using GL but that number is shrinking.
Our team loves Gitlab. We have had a bunch of slow responses today. Also lots of CI/CD glitches yesterday and today... but we recognize this is just due to the exodus so we know it will go back to normal - we're just laughing about it a bit.
Sorry, but you really aren't, quite yet. I'm trying to import a very small repo from github (literally 1 branch and 2 files with hardly any revisions and no issues) and it's still sitting there showing "Starting...". I had tried to mass import a lot and after 11+ hours of "Starting..." I decided to remove them and just try one at a time. Still no luck after almost 20 minutes.
Thing is it’s not even an exodus. It’s just a few thousand repos. And they aren’t even prepared to handle that. If there actually was a mass migration GitLab would completely fall over. I’m sure that would give folks a lot of confidence in the platform though.
It was a really weird day yesterday. I made a few commits, pushed them, my local CI started testing them, everything passed, so I opened an MR on my branch... and there were no commits and no changes in the web UI. When the commits finally appeared, they already had their CI status.
This tweet is, unsurprisingly, too light on details. I wonder what the plus for Xcode is here? I'm sure it already had Git integration. Is Xcode getting project, issue, merge-request etc. management support? And they chose to start that with (or exclusively base it on) GitLab?
Maybe they had a slide for GitHub and they deleted it in light of the Microsoft announcement. Apple is one of very few companies secretive enough to be able to do something like that.
I've just imported some of my open source projects from GitHub to GitLab, not because I'm in panic mode due to MS, but because I had already wanted to try out GitLab again for some time. I used it once, 2-3 years ago, for a short time in a self-hosted version in a client project, and was mostly unhappy with its performance. I thought it was mainly because it was running on very slow hardware, but the gitlab.com version now feels even slower. Is this because they're currently struggling with an unexpected influx of users, or is this normal?
Gitlab requires a lot of resources. We have a local Openstack cluster and were running GitLab on a small instance. It was a beast. Moved to a larger instance and it works beautifully even over the Internet from another ISP.
Google Code had git support and a fair amount of traction within the open source community at one point but they let it stagnate and eventually put it in read-only mode. I don't think Google actually wants to be in the code hosting business for the community at large.
They have a couple, neither are usable by Cloud customers. I believe the git integration is for Stackdriver Debugger (debug stuff in prod with viewing source), and container builder[0].
As for issue trackers, there is the Chromium one that is based on the old Google Code issue tracker [1]. And about a year ago they started exposing a newer issue tracker[2].
We took external investment so we need to either get acquired or IPO. Since 2015 we're aiming for an IPO in 2020 https://about.gitlab.com/strategy/ and so far we're on track.
Companies that IPO generally have a certain amount of annual revenue, a certain sized customer base in particular sectors (e.g. SAAS), and have a certain valuation judged by market capitalization. Companies have to be a certain minimum size and must have a predictable business with stable metrics for public exchanges and markets to be willing to list and purchase shares in the company. This doesn't always require being profitable as of the IPO, but it typically requires a credible plan for achieving profitability (in the eyes of investors).
According to the 2017 IPO Report [1], citing SEC data, the median IPO offering size in 2016 was $95 million, and the median annual revenue was $66.5 million. To pick an example according to Crunchbase, Mulesoft raised $221 million in its 2017 IPO at a valuation of about $4 billion. They had a revenue in 2016 of $188 million with a year-over-year growth rate of 70%. For another example, Dropbox's 2018 IPO valued the company at $11 billion on revenues in 2017 of $1.1 billion.
First off: that is an excellent 101 introduction to the topic, thank you for taking the time to write it.
However I'm already familiar with the topic. I'd just like to hear from Gitlab's CEO what makes him believe he's "on track" to an IPO in 2020. Usually such a display of confidence is backed by some form of evidence. Does Gitlab publish their revenue, or revenue growth, anywhere?
Call me cynical, but the goal of startups is not to become organically profitable. It's something extremely hard to do anyway, especially in the developer space nowadays, where everyone expects everything to be free.
You're misunderstanding this. They added GitLab and Bitbucket in Xcode 10, but Xcode 9 already had GitHub integration. These two are being added in addition to GitHub.
Apple probably planned that long before the announcement of Microsoft's acquisition was made, because a lot of customers requested GitLab/Bitbucket integration after they added the GitHub integration. But they could only make that public after they revealed Xcode 10 at WWDC yesterday. It was just unfortunate that the WWDC reveal and the acquisition announcement were within ~24h of each other.
Apple has no cloud service and no cloud tools to offer, so it makes no sense for them to buy GitLab. Still, it wouldn't be a bad thing. Apple is much more of an open-source company than Microsoft; it doesn't have decades of history fighting OSS, spreading FUD, and taking code from open source without acknowledging its real ownership. I'm sure that would be seen more positively than MS buying GH.
Really? Not sure you've been paying attention. That may have been true in the past and I'm not saying Apple isn't OSS friendly, but the sheer amount of Microsoft code on Github would beg to differ.
[1] https://githubengineering.com/removing-oobgc/