What is your website deployment process like?

For me, I initialized an upstream git repository on my remote server with a git hook that runs a script on post-receive that kills/stops all running docker containers and starts a new one, booting up the newly pulled code.
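A minimal sketch of that post-receive hook (container/image names are made up, and it assumes a Dockerfile sits in the work tree):

#!/bin/sh
# hooks/post-receive in the bare upstream repo
set -e
WORKTREE=/srv/mysite
GIT_WORK_TREE="$WORKTREE" git checkout -f master      # put the pushed code on disk
docker build -t mysite:latest "$WORKTREE"             # rebuild the image from the new code
docker stop mysite 2>/dev/null || true                # stop/remove the old container if it exists
docker rm mysite 2>/dev/null || true
docker run -d --name mysite -p 80:8080 mysite:latest  # boot the freshly pulled code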

Please no static site faggots in this thread

Attached: download.png (260x194, 9K)

Static site on S3.
Problem, dynamicfags?
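Deploy is one command, assuming the built site lives in ./public (bucket name is a placeholder):

# sync the built site to the bucket, deleting files that no longer exist locally
aws s3 sync ./public s3://my-site-bucket --delete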

For my personal projects I push to the master branch of PROJECT's repo.

I have a Common Lisp service that gets hit by a Git webhook. It builds the new Docker image, distributes it to my RPi Docker Swarm, and updates the Swarm service.

That's it. All load balanced, Let's Encrypt niceness. So easy.
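The swarm side boils down to something like this (registry, tag and service names are placeholders):

# build and push the new image, then roll the swarm service onto it
docker build -t registry.example.com/mysite:42 .
docker push registry.example.com/mysite:42
docker service update --image registry.example.com/mysite:42 mysite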

I'm a semi-static site faggot.
All my html pages get rsync'd to the server, and any webapps get their new binaries pushed automatically and restarted.
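Something along these lines, with paths and host made up:

# push the static pages, then the app binary, then bounce the service
rsync -avz --delete public/ deploy@example.com:/var/www/html/
scp bin/webapp deploy@example.com:/opt/webapp/webapp.new
ssh deploy@example.com 'mv /opt/webapp/webapp.new /opt/webapp/webapp && systemctl restart webapp'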

>Please no static site faggots in this thread
This is the only proper type of website faggot.

A bunch of PHP files on my Debian / Apache server. I edit the scripts online.

I'm working on new deployment/hosting infrastructure for my work.

It goes something like:
Git push
Jenkins
- Build (Docker)
- Deploy
Helm
- Deploy to kubernetes (Docker image)
Kubernetes
- Load balance
- Services
Etc....

There's more to it obviously but cbf typing it out.
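The deploy stages mostly reduce to a few commands like these inside the Jenkins pipeline (registry, chart path and names are placeholders):

# build and push the image, then let Helm roll it out to the cluster
docker build -t registry.example.com/app:${BUILD_NUMBER} .
docker push registry.example.com/app:${BUILD_NUMBER}
helm upgrade --install app ./chart --set image.tag=${BUILD_NUMBER}
kubectl rollout status deployment/app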

You can't serve proper 404s or redirects with a purely static site. Hell, you can't handle most of the HTTP protocol with a purely static site.

gcloud app deploy

management jumped onto the cloud meme and .NET Core

>git repo
>a cloud build agent triggers on every commit
>build agent in the cloud pulls repo
>restores all dependencies and builds the solution
>build agent publishes to testing and staging server
>tester approves the shit and switches staging to deployment
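On the build agent that roughly translates to (solution/project names assumed):

# restore, build and publish the solution; output goes to the staging drop folder
dotnet restore MySolution.sln
dotnet build MySolution.sln -c Release --no-restore
dotnet publish src/Web/Web.csproj -c Release -o ./drop/staging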

similar to you, hooks on push for dev and release for production.

I have multiple identical connected microservices running on the same instance, across multiple instances.

after automated testing, services get shut down and restarted randomly-ish over 12 hours.

the microservices are supposed to update themselves instantly when they recognize a signed update in a swarm, but they are restarted anyways just to make sure.

on dev the period is 10 minutes.
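The restart scheduling can be as dumb as a loop like this (service names are placeholders, swarm assumed):

# force-redeploy each service at a random offset within the period
PERIOD=${PERIOD:-43200}   # 12h in prod, 600s on dev
for svc in api worker gateway; do
  ( sleep "$(shuf -i 0-"$PERIOD" -n 1)"; docker service update --force "$svc" ) &
done
wait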

I press the green publish button.

Attached: godaddy.png (287x49, 3K)

The web server can return the 404s and a static site has no need to use most of the protocol. What's the problem?
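Even on S3 the 404 page is one command to configure (bucket name is a placeholder):

# tell the bucket's static-website endpoint which page to serve on 404
aws s3 website s3://my-site-bucket --index-document index.html --error-document 404.html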

You guys have any good introduction videos or text explaining the current methodology? I tend to just use CMSs at the moment, and I figure I should update my understanding of this stuff.

i make shit, test it locally, if it works it goes upstream and then i manually deploy it. literally don't need any kind of redundancy, kubes, fucking docker shit for basic shit.

redpill me on docker swarm vs kuberneets

>install linux
>install apache
>install mysql
>install php
>install composer
>install adminer
>mkdir /opt/framework/dev, /opt/framework/prod, /opt/framework/archive
>do periodic copies into archive
>composer install framework
>Use apache to determine dev or prod

I'm stuck in the past, these new tools scare and bore me.
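For the record the whole thing is roughly this (Debian package names assumed; the apache vhost split between dev and prod is left out):

# install the stack and lay out the directories
apt-get install -y apache2 mariadb-server php libapache2-mod-php composer adminer
mkdir -p /opt/framework/dev /opt/framework/prod /opt/framework/archive
# periodic copy into archive, e.g. nightly via cron
echo '0 3 * * * root cp -a /opt/framework/prod /opt/framework/archive/$(date +\%F)' > /etc/cron.d/framework-archive
(cd /opt/framework/dev && composer install)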

github pages and refresh after a few minutes

>install php

Attached: Police1.jpg (500x398, 20K)

log in via ssh
git pull

Is there a better way than this? How do I automate?

CI/CD using gitlab/bitbucket+circleci

There's a docker container called watchtower that will always pull the latest revision of a container. Seems like it'd be easier for you, OP.
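Something like this, if memory serves on the image name:

# watchtower watches the docker socket and re-pulls/restarts containers when their image updates
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower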

>docker

Attached: 1526232524405.png (380x349, 77K)

> private gitlab on self hosted server
> build with gitlab runner (Hugo/Jenkins)
> rsync to apache host and verify
> reload apache

I used to deploy on docker but it is a pain to configure and apache proxy is really easy to configure for subdomains.
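The runner job is basically this (paths and host are placeholders):

# build the site, push it to the apache docroot, then reload apache
hugo --minify -d public
rsync -avz --delete public/ deploy@web01:/var/www/site/
ssh deploy@web01 'sudo systemctl reload apache2'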

I work on it locally, and once new features are tested I merge them into the stable branch and pull manually on the server. No need for complicated scripts, hooks, and other nonsense.

where are your tests fagot? where is the selective rollout process?
you will get fucked big time one day with all this direct to production deployment
I do this
dev>staging>beta>prod
audits and health checks are done at each stage. It takes hours for commits to reach production. Anything strange stops the flow entirely

Attached: 1526525621564.jpg (456x402, 30K)

what's her name

I wanna see a big black shlong in her tiny asshole

Also known as a FAD-stack.

Depends on the language and server setup, but you could set up a git-hook so that you just need to push to the repo.
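E.g. a bare repo on the server plus a post-receive hook; after that, deploying is just git push (paths and service name are made up):

# one-time setup on the server: bare repo plus a hook that checks out into the docroot
git init --bare /srv/repos/site.git
mkdir -p /var/www/site
cat > /srv/repos/site.git/hooks/post-receive <<'EOF'
#!/bin/sh
GIT_WORK_TREE=/var/www/site git checkout -f master
systemctl restart mysite
EOF
chmod +x /srv/repos/site.git/hooks/post-receive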

nvm, it was unsatisfying. now I have an unresolved boner
anyways, I doubt anyone here is in charge of designing a corporate lifecycle. point being: you can probably move from dev right into production after automated tests are done for all intents and purposes.

You can do that with the process OP describes.

plz post name

yus yuu can sillyhead uwu

elsa
JEANny QUIT LIVIN ON DREAMS

Attached: tumblr_nnkxsh8zF51sfiug7o1_500.gif (480x270, 1.65M)

Who are you, shoeonhead?

ITT: fagots with a

my product has $0 market cap according to my own projections and I'm operating at a consistent loss

> What is your website deployment process like?
We have CI.
I deploy an LXC container from a basic CMS template, register a gitlab-runner, then some programmer pushes changes.
Fuck Docker, it's like shitting from another person's digestive tract.
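Roughly this (image alias, URL and token are placeholders):

# spin up a container from the base image and register a shell runner for it
lxc launch images:debian/12 web01
gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token TOKEN \
  --executor shell \
  --description "web01"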

>What is your website deployment process like?
You mean application.

It goes like this:
1. Gated checkins (tests + code review) get built automatically.
2. Main dev branch gets tagged at chosen commits, these build complete packages. One package goes to every environment, no modifications allowed.
3. Fully automated deployment, but manual pushes based on the backend state.
OR for PROD
3. We hand the package and some scripts with doco to a vendor and more often than not, it all works.

Quick note on step 2: the environment-specific files used to be maintained outside of source control and added to a "base" package that came out of dev.

I also fought for and eventually got:
1. Feature flags instead of a completely cherry picked "release" branch.
2. One deployment a month minimum. PROD was 8 months behind dev at one point.

Unfortunately management is fucking us again by trying to go back on both points. Still fighting the good fight for sane development practices.

web? which frameworks do you use?

I create a tarball on my dev laptop, copy it to the server with scp, and run it there. I'm the only employee of my company.
Am I a brainlet?
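Nothing wrong with that; it's basically this, with names made up:

# package, copy, unpack and restart on the box
tar czf app.tar.gz -C build .
scp app.tar.gz deploy@myserver:/opt/app/
ssh deploy@myserver 'cd /opt/app && tar xzf app.tar.gz && systemctl restart app'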

We use Dropbox as a repo and copy files with FileZilla to our production servers (4 of them).

nice meme

> I create a tarball on my dev laptop
Do you use any VCS, though?