Why don't you use Alacritty?

Alacritty is the fastest terminal emulator in existence. Using the GPU for rendering enables optimizations that simply aren't possible without it. Alacritty currently supports GNU/Linux, BSD, macOS, and Windows.

Alacritty is a terminal emulator with a strong focus on simplicity and performance. With such a strong focus on performance, included features are carefully considered and you can always expect Alacritty to be blazingly fast. By making sane choices for defaults, Alacritty requires no additional setup. However, it does allow configuration of many aspects of the terminal.

The software is considered to be at a beta level of readiness -- there are a few missing features and bugs to be fixed, but it is already used by many as a daily driver.
github.com/jwilm/alacritty

It's also usable as a Wayland-native application.

Attached: 2019-05-13-091146_194x175_scrot.png (194x175, 17K)

Why does a terminal need GPU acceleration?

Do you think this was a good use of your time?

i don't see why i need any terminal emulator to be more complex than xterm.

Why not? GPUs aren't just for graphics anymore, and honestly I wonder whether they should be called GPUs anymore. They're really co-processing cards that can be used for a variety of purposes. See crypto mining as just an example of this. Why not also make use of this power to speed up terminal processes?

How much has changed since this benchmark from last year?
gitlab.com/anarcat/terms-benchmarks/

>Abandon All Hope, Ye Who Enter Here
xterm is a huge mess already.

>using the 3D accelerator to render text
absolutely degenerate. my CPU can software-render the text and throw it at the display adapter in the same amount of time as my 3D accelerator, maybe even quicker, since my CPU is the one actually attached to system memory.
even if my accelerator were attached directly to memory and accessed it as fast as my CPU, would any theoretical difference even be bigger than my display's refresh rate?

> bloating shit for the sake of bloat
Wew

this. unless the task is being run on the accelerator card, it will just waste resources communicating with the rest of the system to render the text.

Not familiar with it, but skimming through, it appears a lot has changed. It mentions the terminal not having a scrollback buffer, which is most certainly not true anymore. It also has transparency, you can paste into it with shift+control+v, it does wrap text, etc.
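For what it's worth, the scrollback and transparency are both just config options now. A sketch from memory using the 2019-era YAML key names, so check the docs for whatever version you're on:

```yaml
# ~/.config/alacritty/alacritty.yml (key names may differ between versions)
scrolling:
  history: 10000          # lines of scrollback to keep
background_opacity: 0.9   # transparency; needs a running compositor
```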

>my CPU can software render the text and just throw that at the display adapter in exactly the same amount of time, or maybe even quicker
Are you sure about that? Also I'm fairly certain that "GPU-accelerated" doesn't mean the processing is literally only done on the GPU. It means the GPU and CPU work in tandem to speed up operations.

just time it lmao, alacritty is faster in every instance

>using the accelerator to accelerate doesn't use the accelerator

>doesn't JUST use the accelerator
ftfy

When you use a discrete GPU to render graphics, it's not being sent back to the rest of the system. It's going straight to the display. That's why you plug your display into your GPU instead of the motherboard if you're not using integrated graphics.

how's my GPU gonna know the output of my command without communicating with the CPU?

Ok, post results of `time seq 1000000`.

The CPU sends rendering commands and data to the GPU. The GPU is almost certainly fast enough to make this a net gain. Even if you're doing software rendering, if you have a discrete graphics card that's outputting to your monitor, the rendered image would have to be sent over anyway.

>his accelerator card cant work as a dumb display adapter

seq 1000000 0.01s user 0.38s system 98% cpu 0.395 total

Nice reading comprehension.

How could it be faster than instantaneous?

what kind of time output is that?
my results:
real 0m0.338s
user 0m0.008s
sys 0m0.326s

It's not actually "rendering" text on the GPU; it's just copying pre-rendered glyphs from one texture (an atlas) to another.
The GPU is what holds your system's framebuffer and scans it out to the monitor. Doing this work on the CPU means the GPU has to read a large chunk of pixel data from main memory every time the terminal redraws.
Sending a few new indices to the GPU and issuing a glDrawElementsInstanced call, which is what alacritty appears to be doing, should be faster and more efficient -- and therefore less degenerate -- than doing everything on the CPU.

you tell me

Attached: XgAGdGA.png (524x29, 5K)

I won't use it until it has font ligature support. Fira Code is just too good. I don't really care how fast a terminal emulator is if it can't render text correctly.

This sounds like a bad TV commercial

Attached: Screenshot_2019-05-12-12-09-28.png (1280x720, 486K)

ah, bash has its own time.

Attached: 2019-05-13-095712_375x20_scrot.png (375x20, 3K)
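
Side note on the mismatched outputs above: zsh's builtin `time` prints the one-line "user/system/cpu/total" format, while bash's builtin prints real/user/sys on separate lines, so both posters were probably timing the same thing. And if you want to measure the terminal rather than seq itself, compare against a run with the output thrown away (a sketch; numbers will obviously vary by machine and emulator):

```shell
# The difference between these two runs is roughly what the terminal
# emulator itself costs: redirected to /dev/null, nothing gets rendered.
time seq 1000000              # terminal has to parse and draw ~1M lines
time seq 1000000 > /dev/null  # same command, zero rendering work
```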

most terminals are ungodly slow

all terminals I've used are already fast enough. Why would I need anything faster? It's just displaying text; I've never noticed any lag editing files with vim or browsing files with ranger.