Microservices

Anyone else working with microservices?

Attached: microservices.png (1400x765, 113K)

yeah op

You enjoying it? What do you use for inter-service comms? REST or a message bus?

I think REST is ultimately better because there's less infrastructure to manage and less risk of ending up with half-finished/corrupted transactions, but a message bus is better if done properly.

Using REST, but it's a bit of a pain to remember where each endpoint is, and it all goes fucky if an endpoint is changed but the service calling it isn't updated...

Half finished stuff is pretty much a given, systems have to be designed with failure in mind.

>tfw docking a microservice in my microkernel to set up a microinstance on my microserver.

Attached: 1361279053612.png (193x247, 93K)

So, a deeper question re: persistence: what's more valuable to persist, state or changes to state?
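
To make the question concrete: the "persist the changes" answer is basically event sourcing. A toy sketch (everything here is made up):

```python
# Toy sketch of "persist the changes": state is derived, the event log is
# the source of truth. All names are made up.
events = []  # stand-in for an append-only store

def record(event):
    events.append(event)

def balance():
    # Current state is just a fold over the change log.
    return sum(e["amount"] for e in events)

record({"type": "deposit", "amount": 100})
record({"type": "withdraw", "amount": -30})
print(balance())  # 70 -- and you can rebuild state any time by replaying
```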

I am.

Migrating big Enterprise software into it rn.

We do use a message bus for signaling and small data, but most of our big data gets served from REST endpoints. That works best for us coz we can integrate with our other e n t e r p r I s e bs.

I am, and it's fucking untestable.
It keeps the interfaces between modules stable because everyone is afraid of what might happen if you change anything, though... That might be a positive.

Also, using almost exclusively RabbitMQ for integration. It's great, but it runs into CAP-theorem trade-offs from time to time.
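
For flavour, publishing a persistent message to a durable queue with pika looks roughly like this (queue name made up):

```python
import pika  # pip install pika

# Durable queue + persistent delivery so a broker restart doesn't lose work.
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()
channel.queue_declare(queue="orders", durable=True)  # queue name invented

channel.basic_publish(
    exchange="",
    routing_key="orders",
    body=b'{"order_id": 42}',
    properties=pika.BasicProperties(delivery_mode=2),  # write to disk
)
conn.close()
```

Durability only helps with restarts, mind; a network partition is still a partition.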

Sounds like fun. Are you keeping the legacy system going + publishing messages from it?

>fucking untestable

I hear you. I'm trying to work out if this is actually worse or better when using REST throughout..

Having well-defined interfaces seems like the route to go down, and it sort of happens naturally like you say, but I'm thinking about defining contracts for each microservice; these could then be used to generate stubs that simulate other services.

That, or just deploy it all and run automated tests (maybe Cucumber..) on the whole thing..

Service stubs tend to get ever so slightly outdated, so I don't think they'd be a great solution. Something like mbtest.org/ might be better, though. I've yet to try it.
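
From the docs, spinning up a fake dependency looks roughly like this (mountebank's API on its default port 2525; the stubbed endpoint and payload are invented, so treat this as a sketch):

```python
import requests  # pip install requests

# Ask a running `mb` process to create an HTTP imposter on port 4545 that
# fakes a downstream user service. Endpoint and payload are invented.
imposter = {
    "port": 4545,
    "protocol": "http",
    "stubs": [{
        "predicates": [{"equals": {"method": "GET", "path": "/users/1"}}],
        "responses": [{"is": {"statusCode": 200, "body": '{"id": 1, "name": "anon"}'}}],
    }],
}
requests.post("http://localhost:2525/imposters", json=imposter).raise_for_status()

# Point the service under test at localhost:4545 instead of the real thing.
print(requests.get("http://localhost:4545/users/1").json())
```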

Integration through REST brings up another can of worms entirely... How many network requests are you doing before answering your client? What if one of them fails? Networking is pretty reliable nowadays, but it will never beat in-process communication. Also, there's just so much more shit to take care of that occasionally fails. Not super frequent, but it's yet another concern that didn't exist in "less distributed" architectures.
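
Concretely, every one of those hops ends up wrapped in something like this (service name and fallback are invented):

```python
import requests

def fetch_price(item_id):
    # One downstream hop: always set a timeout and decide up front what
    # failure means for *your* response. Endpoint is hypothetical.
    try:
        r = requests.get(
            f"http://pricing-svc/prices/{item_id}",
            timeout=2.0,  # never block your client on a slow dependency
        )
        r.raise_for_status()
        return r.json()["price"]
    except requests.RequestException:
        return None  # caller degrades gracefully: cached value, 503, etc.
```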

If I had to start a large project nowadays, I'd take this approach: Have at most a handful of reasonably large services only occasionally consistent with each other.
Each of them would have a "main" application bootstrapping together library projects for each of the bounded contexts. Those libraries can evolve independently and have their own repositories, and they can import each other, as long as no cycles are formed.
That way, every release I can just update all dependencies on "main" to their latest versions, run all tests on everything and be confident that the whole thing works together. Then deploy this "large" artifact.
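
A toy sketch of the shape (the contexts are stubbed inline so it runs; in real life each one is its own library in its own repo, and all names are made up):

```python
# "main" composes independently evolving bounded-context libraries into one
# deployable artifact; it owns no business logic itself.
class BillingContext:
    def routes(self):
        return {"/invoices": lambda: "invoice list"}

class InventoryContext:
    def routes(self):
        return {"/stock": lambda: "stock levels"}

def bootstrap(contexts):
    table = {}
    for ctx in contexts:
        table.update(ctx.routes())  # each context wires only its own stuff
    return table

app = bootstrap([BillingContext(), InventoryContext()])
print(app["/stock"]())  # one artifact, N independently versioned libraries
```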

I know nobody asked for it, but that's my two cents

Pretty fun, really. Especially coz I got a free pass to burn the old codebase as I see fit.

The legacy shit is running, but we are currently on a big freeze. Day 1 next year my system takes over.

Oh, the Big Rewrite in the Sky... Uncle Bob would like a few words with you, son

Attached: uncle-bob-martin-h-1535100693492.jpg (400x368, 17K)

>mbtest.org/
Looks interesting..

A few; depending on the call, I think up to 4 or 5... This is a problem, but we persist requests and retry where appropriate so we can restore state after failures. Our services call lots of third-party APIs with state, so this is basically essential.
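
The persist-and-retry part looks roughly like this (SQLite standing in for whatever store you'd actually use; table and endpoint names are invented):

```python
import sqlite3
import requests

db = sqlite3.connect("outbox.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS outbox "
    "(id INTEGER PRIMARY KEY, url TEXT, body TEXT, done INTEGER DEFAULT 0)"
)

def enqueue(url, body):
    # Persist the outgoing request first, so a crash can't lose it.
    db.execute("INSERT INTO outbox (url, body) VALUES (?, ?)", (url, body))
    db.commit()

def retry_pending():
    rows = db.execute("SELECT id, url, body FROM outbox WHERE done = 0").fetchall()
    for rowid, url, body in rows:
        try:
            requests.post(url, data=body, timeout=5).raise_for_status()
            db.execute("UPDATE outbox SET done = 1 WHERE id = ?", (rowid,))
            db.commit()
        except requests.RequestException:
            pass  # leave it for the next sweep
```

The catch with stateful third-party APIs is that retries have to be idempotent, or you need a dedup key per request.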

Libraries work well, but binding everything into a single artifact seems like it wouldn't scale well or allow individual microservices to be deployed on their own..

5 network calls plus auth and database... Is that async? Latency might become a real problem.
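
If the calls are independent, you can at least fan them out concurrently so total latency is the slowest hop rather than the sum. An asyncio/aiohttp sketch (URLs invented):

```python
import asyncio
import aiohttp  # pip install aiohttp

# Hypothetical downstream services, hit concurrently.
URLS = ["http://auth-svc/check", "http://user-svc/profile", "http://price-svc/quote"]

async def fetch(session, url):
    async with session.get(url, timeout=aiohttp.ClientTimeout(total=2)) as resp:
        return await resp.json()

async def main():
    async with aiohttp.ClientSession() as session:
        # Failed calls come back as exception objects to handle per-result.
        return await asyncio.gather(
            *(fetch(session, u) for u in URLS), return_exceptions=True
        )

print(asyncio.run(main()))
```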

What about it wouldn't scale well? There's nothing restricting the number of nodes in production, or even preventing every library from using its own separate database.
Also, of course you can deploy individually, at least in the most mature stacks. Dynamic loading of JARs and DLLs has been a thing since forever.
You could argue that you can't have independent Docker containers, but I'm not 100% sure that ditching Docker in production is a bad thing. (During development it's very useful, though)

This is only on some calls; the bulk of them use 1 or 2, it really depends.

Docker + Kubernetes works quite nicely..

I just think that maybe something is lost by binding things together at any point.

Yep, I work on a proprietary Linux distro and the whole thing is microservices-based. We use both async RPCs (ZeroMQ and protobuf) and Redis pub/sub for IPC. Pretty comfy.
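
For anyone curious, a minimal REQ/REP round trip with pyzmq (both ends in one process just to show the flow; the real payload would be a serialized protobuf, and the port is made up):

```python
import zmq  # pip install pyzmq

ctx = zmq.Context()

# Server end (normally a separate service).
rep = ctx.socket(zmq.REP)
rep.bind("tcp://*:5555")

# Client end.
req = ctx.socket(zmq.REQ)
req.connect("tcp://localhost:5555")

req.send(b"get_status")   # client asks...
print(rep.recv())         # ...server receives the request
rep.send(b"ok")           # ...and replies
print(req.recv())         # b"ok"
```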

Yeah
HTTP is shit and better avoided for anything in the backend. Use something like Kafka instead. Also look for opportunities to use raw TCP, UDP, or HTTP/2 if you must.
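
e.g. replacing a blocking HTTP POST between services with a fire-and-forget produce (kafka-python; topic name invented):

```python
import json
from kafka import KafkaProducer  # pip install kafka-python

# Instead of POSTing to another service and blocking on its response,
# publish an event and let whoever cares consume it. Topic name invented.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("order-events", {"order_id": 42, "status": "created"})
producer.flush()  # block only until the broker has the message
```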

Yep

The way I upversion interfaces between services is by having a v1, v2, ... v(n) API. When v(n) is released, I keep instances of v(n-1) running alongside the new v(n) instances. The v(n-1) APIs get a flag set so that they serve deprecation warnings (or, if the dependants are few enough, I'll alert the team responsible). Once v(n-1) isn't used anymore, I stop spawning those instances.
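
A minimal sketch of the deprecation-flag part (Flask; the routes and header names are illustrative, not from any standard I'd swear by):

```python
from flask import Flask, jsonify

app = Flask(__name__)
V1_DEPRECATED = True  # flipped once the v2 instances are live

@app.route("/v1/users/<int:uid>")
def get_user_v1(uid):
    body = jsonify({"id": uid, "name": "anon"})
    if V1_DEPRECATED:
        # Dependants see this on every response and can plan their migration.
        return body, 200, {"Deprecation": "true",
                           "Link": '</v2/users>; rel="successor-version"'}
    return body

@app.route("/v2/users/<int:uid>")
def get_user_v2(uid):
    return jsonify({"id": uid, "display_name": "anon"})

if __name__ == "__main__":
    app.run()
```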