Why don't we just program in raw lambda calculus...

Why don't we just program in raw lambda calculus? It's the simplest Turing-complete system, with maximum flexibility, optimal portability, and abstractness, at the cost of every operation being O(log n).

Attached: Lambda-Calculus_3.png (550x335, 35K)

because of tranny mathlet brainlets ruining programming

By the time you'd abstracted it enough to do productive things, wouldn't everyone basically be using a separate language again?

Only where the outside of the machine interacts with the inside of the machine.

You should program in the Calculus of Constructions.

Isn't that what Lisp and its various derivatives are, bro? In which case, a select few are doing so...

DSLs for specific tasks are useful

You almost had me OP

>at the cost of every operation being O(log n)

Attached: laugh-lohan.gif (341x200, 204K)

this

The Turing tarpit is a thing, you know?

because lambda calculus has no concern for limited memory, but real life does

>O(log n) for every operation
What the fuck does n represent in every operation?

input size

>the state of Jow Forums

but that makes no sense
let c be the cost of using the summation operator +(., .)
summing two numbers, the cost is c, not log(2)*c
summing one number, you're using a (binary) operator incorrectly and the code doesn't run

it's not about the absolute cost, it's about how the cost scales with the input
so if you increase n tenfold, the cost only increases logarithmically

If the cost is constant for any two numbers, then with the binary addition operator the cost of summing will be c*(n-1), where n is the number of numbers, i.e. the inputs, hence it scales as O(n-1), i.e. O(n)
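Rough Haskell sketch of what I mean (sumWithCount and step are just names for illustration, nothing standard):

import Data.List (foldl')

-- folding a binary (+) over a list of n numbers applies the operator
-- exactly n-1 times, i.e. a total cost of c*(n-1)
sumWithCount :: [Int] -> (Int, Int)
sumWithCount []     = (0, 0)                -- nothing to add
sumWithCount (x:xs) = foldl' step (x, 0) xs
  where
    step (acc, ops) y = (acc + y, ops + 1)  -- one (+) application per remaining element

main :: IO ()
main = print (sumWithCount [1..10])  -- (55, 9): ten numbers, nine additions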

1) Lambda calculus is not the simplest Turing-complete computational model.
2) Lambda calculus does not provide O(log n) for every operation (I don't even know what that is supposed to mean).

It's cool that you're excited about lc, but you can create a new Turing-complete computational model easily. It's only good from a theoretical perspective.

finally someone else being autistic about this
what does the retard OP even mean

I came here just to post this, and saw someone already did.
>fistbump.png

I didn't expect to see a fellow Coq bro here

I don't think you've done an Algorithms course

>Why don't we just program in raw lambda calculus?
Why didn't you reply in raw lambda calculus? Stop being dumb!

Haskell is close enough and is also a fully-featured programming language.

>I don't even know what that is supposed to mean
He means that if every individual lambda function evaluation takes a similar unit of time, the best execution time you can achieve for any algorithm is O(log(n)), for any quantity n of whatever data structure you choose as input.
I don't know if that's true, but it certainly rings true.
Think about it: you don't even have numbers as primitives. 5+5 requires you to encode each number as a pile of individual units, something like y(y(y(y(y())))), and you have to walk through every one of those units in each operand to complete the operation.
Brainfuck has a similar problem, since arithmetic beyond increment and decrement isn't built into the language.
In a way, these minimalistic Turing-complete languages are the slowest kind of language, while things like CUDA, which are built for massive parallelism, let you process complex data structures like arrays in O(1) time (up to a point, of course, since the hardware isn't infinite).
Lambda calculus could be implemented so that the interpreter recognizes particular lambda-term structures in the program and evaluates them in O(1), sure, but that would be hugely inefficient compared with a language that simply provides operations like addition. The average addition in a program would take multiple kilobytes of memory when encoded in lambda calculus, and the interpreter would have to do string comparison against a huge table of pre-computed results to detect additions and execute them in O(1), or use some kind of advanced AI that does the same thing without storing the result of every operation. Either way it would still be orders of magnitude slower than the slowest, shittiest interpreted language in current use.
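If you've never seen the encoding, here's roughly what it looks like embedded in Haskell (just my own sketch; Church, zero, suc, add, toInt are names I made up): a Church numeral n literally means "apply f n times", so even 5+5 has to replay all ten applications of f.

{-# LANGUAGE RankNTypes #-}

-- a number is "how many times you apply f to x"
newtype Church = Church (forall a. (a -> a) -> a -> a)

zero :: Church
zero = Church (\_f x -> x)

suc :: Church -> Church
suc (Church n) = Church (\f x -> f (n f x))

-- addition chains the two towers of applications together,
-- so the work grows with the size of the operands
add :: Church -> Church -> Church
add (Church m) (Church n) = Church (\f x -> m f (n f x))

toInt :: Church -> Int
toInt (Church n) = n (+ 1) 0

five :: Church
five = suc (suc (suc (suc (suc zero))))

main :: IO ()
main = print (toInt (add five five))  -- 10, after unwinding ten applications of f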

Just an aside: I meant that the idea that it's slower rings true, not the particular O(log(n)); I don't know about that.
I'm just saying most algorithms would be more like O(n*n), when in a sane language the same algorithm could run in O(n) or so, because the language implements more complex atomic types.

Because Jow Forums seems to hate anything with that many (((parentheses)))

Attached: 54F6BBC6-C983-4A3B-B25E-263BA0EC0115.png (980x742, 266K)

BUMP

That doesn't make sense either. If you need a new type, nobody stops you from creating one. You improve time complexity, but you get worse space complexity in return. This holds in every language. Minimum time complexity is a property of the problem, not of the programming language.

okay, write a fizzbuzz

isn't every functional programming language using lambda calculus?

Yeah, I think I'm misusing big O notation here. Ignore space complexity for a moment, since it doesn't matter for what I'm getting at.
Correct me if I'm wrong, but I think you're assuming you're allowed to interpret the n in O(n) however you want, as long as your types are finite (i.e. can be expressed as a finite sequence of symbols). In your interpretation, a vector of any arbitrary size can be counted as n=1, just like an integer of arbitrary bit length can be counted as n=1, but you have to keep that length fixed when comparing how long the algorithm takes on inputs of different lengths. This breaks down if types are allowed to hold an infinite amount of information, because then, no matter the size of the input, you can always run an algorithm over it in O(1). You're only allowed a finite number of types for the same reason.
In the interpretation I used, the size of the input is the number of symbols it's composed of that the language understands natively, without letting some sub-sequences (aka types) count as individual elements.
However, even if big O notation isn't the right tool for the job, I think there's value in what I'm trying to express: some Turing-equivalent machines can inherently do more work per instruction (or per unit of time, if each instruction takes a fixed amount of time) for some types than others, and this changes depending on which particular Turing-equivalent machine you're using.
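A toy Haskell comparison of what I'm getting at, assuming a machine Int addition counts as one native step (Nat, addNat, fromInt, toInt are names I made up):

-- unary naturals: the "number" is just a chain of constructors
data Nat = Z | S Nat

-- with native Ints, 5 + 5 is one machine add;
-- with the unary encoding, the same addition walks the whole first operand
addNat :: Nat -> Nat -> Nat
addNat Z     n = n
addNat (S m) n = S (addNat m n)   -- one recursive step per unit of the first operand

fromInt :: Int -> Nat
fromInt 0 = Z
fromInt k = S (fromInt (k - 1))   -- assumes k >= 0

toInt :: Nat -> Int
toInt Z     = 0
toInt (S m) = 1 + toInt m

main :: IO ()
main = do
  print (5 + 5 :: Int)                            -- one native operation
  print (toInt (addNat (fromInt 5) (fromInt 5)))  -- five recursive steps, then 10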

So what the user was getting at is that if you're only allowed to perform a certain number of operations per second on a finite set of types (and we are, because physics), then every type not included in this set of native types is going to be a composition of native types, and it'll take multiple native operations on native types to emulate a single operation on the composite type we've created.
This means you have to be very picky about which types your Turing-equivalent machine implements natively, to make sure those are actually the types you'll use most often, and you'll also want to maximize the number of types the machine implements.
So if we're only allowed one Turing-equivalent machine running at a fixed number of instructions per second, the one that does lambda calculus would be one of the shittiest, because you're basically only allowed one type (lambda functions).
Of course this assumes you can only have one machine; in real life it might be worth having shittier machines if that lets you have more of them, thanks to economies of scale and so on.
Still, the tradeoff seems to lean toward the CISC side, since most people seem to prefer a few really complex CPU cores that do a lot of work in a single step over hundreds of really shitty cores that do very little actual work per step, which is what a lambda calculus computer would be unless it implemented a bunch of AI to turn subsets of the program into single CISC instructions.
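Toy example of that composite-type overhead, assuming the native type is a 64-bit word (U128, addU128 are made-up names, not a real library):

import Data.Word (Word64)

-- a 128-bit number emulated as two native 64-bit words
data U128 = U128 { hi :: Word64, lo :: Word64 } deriving Show

-- one "composite" add costs several native operations: two adds plus carry handling
addU128 :: U128 -> U128 -> U128
addU128 (U128 ah al) (U128 bh bl) = U128 (ah + bh + carry) lowSum
  where
    lowSum = al + bl                      -- native add of the low words (wraps on overflow)
    carry  = if lowSum < al then 1 else 0 -- detect that overflow
    -- the high words get a second native add, plus the carry

main :: IO ()
main = print (addU128 (U128 0 maxBound) (U128 0 1))  -- U128 {hi = 1, lo = 0}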

Lambda calculus is one particular, very simple example of a functional programming language. Or maybe we can think of all functional programming languages as extensions of lambda calculus, so in that sense they are using it.
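For instance, the raw term λf.λx.f (f x) (the Church numeral 2) is more or less a Haskell expression verbatim; tiny sketch, the name is mine:

-- the same lambda term, just written in Haskell syntax
two :: (a -> a) -> a -> a
two = \f x -> f (f x)

main :: IO ()
main = print (two (+ 1) 0)  -- 2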

>so in that sense they are using it.
all the confirmation I needed, thanks.

test