Doomsday tech

>Is humanity doomed to die at the hands of its own technology?

The short answer is: probably not.

This video is only tangentially related, since it's about the Fermi paradox, but he covers most of what you're worried about, so I'd give it a watch: youtube.com/watch?v=zmbldpqn0K4&list=PLIIOUpOge0LulClL2dHXh8TTOnCgRkLdU&index=3

The tl;dr is that critical technology failures of the magnitude required to wipe humanity out for good, rather than merely inconvenience civilisation for a while, are profoundly unlikely. Humanity wouldn't be made extinct by a nuclear war, for instance. Remember, to be a real doomsday scenario it has to be an extinction-level event. Even if we were knocked back to the stone age with only a few thousand survivors of whatever technological apocalypse we hit ourselves with, we'd be back eventually.

>We understand the problem domain for general intelligence too
For the 11th time: no, we don't. Stop making things up. What you're basically saying is that if you write your program JUST right, it will behave intelligently. It's a dumb argument.

That's the long run, though. Yes, 100 years from now the world will be fucking awesome: everybody will have basic income and free healthcare while robots and AI systems do all the work. But in the short term, the government will be in denial, refusing to admit that the traditional "labor for money" system no longer works.

Nature created general intelligence with nothing but time and brute force. I see no reason why we couldn't achieve the same given time and the scientific method.
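
To make that concrete, here's a toy sketch of the point in Python: a Dawkins-style "weasel program", where blind mutation plus selection converges on a target string without the search ever understanding anything about it. The target string, alphabet, mutation rate, and population size are arbitrary choices for the sketch, nothing canonical.

import random

# Toy "time plus brute force" demo: a Dawkins-style weasel program.
# Blind mutation plus selection reaches the target with zero insight
# into why any given character is right. All parameters are arbitrary.

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
MUTATION_RATE = 0.05  # chance each character mutates in an offspring
OFFSPRING = 100       # variants spawned per generation

def fitness(candidate):
    # Count characters that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent):
    # Copy the parent, blindly flipping characters at random.
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in parent
    )

def evolve():
    # Start from random noise; keep the fittest of parent + offspring
    # each generation until the target string appears.
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        generation += 1
        candidates = [parent] + [mutate(parent) for _ in range(OFFSPRING)]
        parent = max(candidates, key=fitness)
    return generation

print("Reached the target in", evolve(), "generations")

It usually hits the target in a few hundred generations. The point isn't that this is how you'd build AGI, just that dumb search plus selection pressure gets there eventually, and that's all evolution had to work with.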