HTTP2

Are you ready for HTTP2, Jow Forums?

Attached: image02.gif (700x685, 19K)

It's already enabled on my server, so yea

yes, I program servers in Go, and the Go devs introduced support for HTTP/2 in Go like a year ago
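just to give a rough idea (not the one true way, cert paths are placeholders): with Go's net/http you serve over TLS and HTTP/2 gets negotiated automatically, no extra code

package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        // r.Proto reports "HTTP/2.0" when the client negotiated h2 via ALPN
        fmt.Fprintf(w, "hello over %s\n", r.Proto)
    })

    // ListenAndServeTLS enables HTTP/2 automatically for TLS listeners;
    // cert.pem and key.pem are placeholder paths
    log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
}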

Why would you drink coffee and wine together? That's like asking for diarrhea

Why would that be? Not everyone has a weak stomach and gets diarrhea from everything.

If you are used to things they don't have extreme consequences.

Most frameworks won't support this shit for a long time.

Don't use shitty frameworks then
If you can't guarantee that your framework is getting active development to keep up with the fuckfest that is changing web standards, why bother using it?

botnet

Enjoy your peptic ulcer my man.

where do you work, if you dont mind me asking. Sounds like fun

my server has never not supported http/2, m8

What benefit does this have over 1.1?

the fact that I need to install apache through a third party repo to get http2 support is a disgrace.

Honestly, just be thankful web standards exist.
As someone who has to do email marketing from time to time, nonexistent standards are hell.

I'd rather fancy cat-v's HTTP 0.2.
http02.cat-v.org/

Attached: fuckyou.gif (700x685, 137K)

oh, I don't do dev for a living, I just code small stuff for fun
also, it was introduced with Go 1.6... more than a year ago

faster websites

>I have no idea what HTTP2 is

>I have no idea how bad they're going to shit up HTTP2 but I'll try to be condescending toward other people anyway

How are they going to fuck up a specification released 3 years ago?

Yeah man, it's just like Adobe Flash. Adobe Flash is just a specification from years ago; that means it's fine.

Adobe Flash isn't a specification.

Unlike you, most people have company when they go out to eat.

It's enough that your frontend proxy supports it.

Already using it

All requests use just one TCP connection, and a request can start before the previous one has finished loading.
Also header compression. HTTP/1.1 doesn't compress cookies at all.
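rough sketch of what that buys you from the client side (example.com is just a placeholder, assumes the server speaks h2): fire a handful of requests concurrently through one http.Client and Go's transport multiplexes them as streams over a single TLS connection, with the headers HPACK-compressed for you

package main

import (
    "fmt"
    "log"
    "net/http"
    "sync"
)

func main() {
    // the default client negotiates HTTP/2 for https:// servers that support it
    // and multiplexes concurrent requests over one TCP+TLS connection
    client := &http.Client{}
    var wg sync.WaitGroup

    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            // placeholder URL for illustration
            resp, err := client.Get(fmt.Sprintf("https://example.com/asset-%d", i))
            if err != nil {
                log.Println(err)
                return
            }
            resp.Body.Close()
            fmt.Println(resp.Proto, resp.Status) // "HTTP/2.0 200 OK" if h2 was negotiated
        }(i)
    }
    wg.Wait()
}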

Aren't we already using html 5?
What is this thread about?

HTTP/2 is overly complicated for no reason. We wouldn't need it if it weren't for uselessly bloated web applications.

so all this HTTP2 thing is, is HTTP that has done away with cookies.

wow its fucking nothing.

sure, but SWF files have a specification: wwwimages2.adobe.com/content/dam/acom/en/devnet/pdf/swf-file-format-spec.pdf

Have you tried killing yourself?

It doesn't do away with cookies, it compresses them.
It also has a noticeable speed boost. http2.akamai.com/demo

Haha, I won this argument.

Have you tried killing yourself though?

No u

No u

This red font gave my curator eye cancer, thanks.

Attached: шакал.jpg (225x225, 6K)

we got him reddit XD

Looks bloated.

>only ordered coffee
>brings all the other useless shit anyways

He can't feel shit because he's full of diabeetus and gastric tornado.

You really don't understand what HTTP/2 is.

why is that pic missing the other server's hand handing you a bunch of Jehovah's Witness papers? http2 isn't really http2 unless it has the server push shit
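for reference, push on the server side looks roughly like this in Go (the /style.css asset and cert paths are made up, and the client still gets to reject the pushed stream):

package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        // over HTTP/2 the ResponseWriter also implements http.Pusher,
        // which lets the server push a resource before the client asks for it
        if pusher, ok := w.(http.Pusher); ok {
            // made-up asset path for illustration
            if err := pusher.Push("/style.css", nil); err != nil {
                log.Printf("push failed: %v", err)
            }
        }
        fmt.Fprintln(w, `<link rel="stylesheet" href="/style.css"><h1>pushed</h1>`)
    })
    http.HandleFunc("/style.css", func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "text/css")
        fmt.Fprintln(w, "h1 { color: red; }")
    })
    // push only works on h2, which browsers only speak over TLS;
    // cert.pem and key.pem are placeholder paths
    log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
}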

well obviously

Just add `http2` to your nginx listen line
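something like this, assuming a build of nginx with the http_v2 module (hostname and cert paths are placeholders):

server {
    listen 443 ssl http2;                          # h2 is negotiated via ALPN over TLS
    server_name example.com;                       # placeholder hostname

    ssl_certificate     /etc/ssl/example.com.pem;  # placeholder paths
    ssl_certificate_key /etc/ssl/example.com.key;

    location / {
        proxy_pass http://127.0.0.1:8080;          # backend can keep talking HTTP/1.1
    }
}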

My production server has used it for like 6 months already.

There's no excuse not to, really.

it looks easier to choke; asynchronous multithreaded transfers are a better design.

We got an evolution of Google's SPDY protocol.
Solves Google's problems, Facebook's problems, but not everyone else's problems.

I've already got it set up in my nginx configs and apps, you're a bit behind the times.

What are these "everyone else's problems" with http that you'd like to see solved?

So what is the new shit with http2.0? Why should I love or hate it?

Attached: http2 push example.png (700x685, 98K)

A protocol which doesn't stop working as soon as a certificate expires. One that doesn't even require encryption would be good.

>food analogy

>One that doesn't even require encryption would be good.

I don't understand Jow Forums sometimes

Always with the "botnet" and paranoia around data collection.

and then this.

We currently have the issue where http1.0 is ubiquitous, and insecure. Fixing this by using a single secure protocol makes far more sense.

That's a completely valid viewpoint. I don't like that websites are going to have an effective lifetime of 90 days after Let's Encrypt deprecates an API endpoint.

Oh boy, you don't know?
Technically the http2 specification does not need https, and if you control client and server, you can use http2 over an unencrypted connection.
The reason browsers only use http2 over https is firewalls. Http2 is binary, and some firewalls would not allow binary over port 80 because they do not recognize it as HTTP, so some routers with built-in firewalls would block http2 over an unsecured connection. To protect users from websites getting blocked by a dumb firewall, browsers decided to only allow http2 over an encrypted connection, because then firewalls can't see the protocol.
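if you want to try the controlled-both-ends case yourself, here's a rough Go sketch of cleartext http2 (h2c) using the golang.org/x/net/http2/h2c package; port and message are arbitrary

package main

import (
    "fmt"
    "log"
    "net/http"

    "golang.org/x/net/http2"
    "golang.org/x/net/http2/h2c"
)

func main() {
    handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "served over %s without TLS\n", r.Proto)
    })
    // h2c.NewHandler wraps a handler so the server also accepts HTTP/2
    // over plain TCP -- only useful when you control both ends,
    // because browsers refuse to speak h2 without encryption
    log.Fatal(http.ListenAndServe(":8080", h2c.NewHandler(handler, &http2.Server{})))
}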

IMAGINE BEING AT COMPUTERS

I AM FAT FUCK

we're not all the same person, you know
encryption fetishism actually kind of pisses me off because it imposes unnecessary limits on sites that would otherwise function fine on certain systems I like using and gives me no choice to just turn it the fuck off

IMAGINE BEING FAT SO COMPUTERS BEND HER OVER AND FUCK HER ON

Holy shit don't use that eye piercing shade of red ever again.

Based anti-botnet bro

>What is keep-alive..

>the transfer protocol is botnet
[citation fucking needed]
1 connection is still faster than 6

hmmm

Attached: OperavsFirefoxvsChromevsEdge.jpg (1920x1303, 1.65M)

Nice latency KEK

what's yours

>t. edgy faggot who has no idea what they're talking about
nice bait tho

Attached: 5279-16830-11723.png (1000x630, 151K)

Bump

Cool

http2.akamai.com/demo

Attached: 2018-04-30 10_59_16.png (944x499, 244K)

why is edge shit for HTTP/2

Attached: 2018-04-30 11_01_17.png (945x538, 298K)

How did that happen?

Attached: Knipsel.png (645x686, 182K)

No idea, re-did it and it's still worse, but less so.

Attached: 2018-04-30 11_08_45.png (1026x542, 314K)

yup, re-tested it half a dozen times and it's giving me the same.

Attached: 2018-04-30 11_10_13.png (961x528, 301K)

to be fair, edge is shit for everything.

Maybe it's just my internet?

Here is firefox

Attached: 2018-04-30 11_13_40.png (959x521, 258K)

And here is chrome.

Looks like it might just be my internet?

Attached: 2018-04-30 11_15_27.png (957x500, 245K)

HTTPS is mandatory under HTTP2

The standard itself doesn't have mandatory encryption, but all the major browsers won't accept HTTP2 without encryption, so in a way it is mandatory.

>having internet so good that HTTP/2 is worse than HTTP/1.1

Wew, must be fiber or some shit.

Can someone explain this? If HTTP is faster with a better connection, why is HTTP2 going to be the standard? Especially as internet speeds improve over time?

Great
Now my ice cream is going to melt
And my coffee is going to get cold
While I eat my sandwich
Thanks HTTP2

Dis.

>gastric tornado.

Attached: index.jpg (229x220, 8K)

We already had web 2 and html 5, so wouldn't this be http 6 or something?

>HTML 5 exists
>downgrading to http2
Lmao

Attached: 1517350539401.gif (640x480, 1.26M)

Keep-alive uses the same connection, noob

> not knowing what it is
> not knowing your proxy should do the work

Keep alive does not add multiplexing.

The compression for separate files will have the highest ratios compared to an aggregate compression of a tarball with all the files. (read how a basic compression algorithm works)

The only thing you get away with is the very, very small delay when streaming the GET requests from client to server.
There is no multiplexing, it's just a single TCP/IP socket.

Cookie compression is a non-issue for most sane websites; they are session identifiers, they shouldn't store a lot of data and bounce it back and forth

>The compression for separate files will have the highest ratios compared to an aggregate compression of a tarball with all the files
Are you implying http2 does not compress files individually? Pic related
>The only thing you get away with is the very, very small delay when streaming the GET requests from client to server.
>There is no multiplexing, it's just a single TCP/IP socket
Multiplexing is one of the biggest, if not the biggest, points of h2. It means that if a request has a delay before the answer is sent (for example, something the server has to query a database for), the connection can still be used for other requests, something HTTP/1.1 did not do.
>Cookie compression is a non-issue for most sane websites; they are session identifiers, they shouldn't store a lot of data and bounce it back and forth
Cookies are not the only headers being compressed; do not underestimate the shittiness of mobile internet.

Attached: Knipsel.png (892x102, 12K)
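
to make the multiplexing point concrete, a toy Go server sketch (paths, the 2-second delay, and cert paths are all made up): over one h2 connection a client can fetch /slow and /fast at the same time, while the same single HTTP/1.1 connection would have to serve them one after the other

package main

import (
    "fmt"
    "log"
    "net/http"
    "time"
)

func main() {
    // simulates a handler stuck waiting on a slow database query
    http.HandleFunc("/slow", func(w http.ResponseWriter, r *http.Request) {
        time.Sleep(2 * time.Second)
        fmt.Fprintln(w, "slow answer")
    })
    http.HandleFunc("/fast", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "fast answer")
    })
    // cert.pem and key.pem are placeholder paths; h2 kicks in automatically over TLS
    log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
}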

You do understand that multiplexing adds connection overhead, right? It is like HTTP/1.1 chunked encoding, which again already exists and is not really used because it is a crap idea.

Does this image imply that you can't serve more than one resource per connection with 1.1? I think I had a task of writing a small server in C that was supposed to do this from day one.

Yes, this is the keep-alive flag.

In any case it will not necessarily be bad, although improvements will be marginal.
To be honest, the real reason I don't like it is that I'm too bored to implement all the new things in my webserver (github.com/AmmarkoV/AmmarServer)

I think you can, but that one connection contains all those requests and it is compressed.

>connection overhead
Then explain pic related.
It can, but only 1 request at a time, so if the server has to query a database and is not sending data yet, that connection cannot be used for something else. Browsers' default behavior is to open multiple connections to load things simultaneously.

Attached: Knipsel.png (899x453, 234K)

Wasn't it also in 1.1? I remember I was reading only the 1.1 protocol when I was writing my server, so that's why I even know about the keep-alive flag.

explain

those results make no sense and I can't reproduce them.

Attached: Knipsel.png (739x511, 79K)

So this won't help much when my page request speeds are already under 0.5s?

underrated