Iexecutive here

i will answer your development questions on the next greatest leap in computer technology.
you could say im close to the dev team
things i answer in this thread

>how does it work?
>why does my coin need iexec?
>i dont see how anybody will use these, please explain?
>as a developer how does this help me?

and all other general questions and questions relating to ethereum as well

Attached: iexechive.png (191x205, 15K)

What does this token do? Who are your competitors?

Attached: ajbg.png (504x375, 167K)

when projects utilize iExec for their side chain scaling solutions, how will the coin be utilized by these projects? i'm thinking about Shopin, which claims that iExec performed over 1 million TPS under lab conditions. will Shopin need to purchase RLC coins or how will that go?

what makes larping so fun?

>1 million tx/s
Who the fuck pays for the power and hardware? Seriously. This is the workload for a proper cluster that costs millions to build and operate, and the associated costs don't get smaller just because you spread them out, especially if it's done on slow infrastructure like a trustless blockchain or the internet in general.

TL;DR the economics and logistics of these claims make no sense

this is such a generic question i'm going to refer you to the whitepaper, which answers everything. it was updated recently: iex.ec/whitepaper/iExec-WPv3.0-English.pdf

this is a common misunderstanding. that TPS figure was achieved because they were using BigchainDB. yes, they will need to purchase RLC for anything that involves computations; this is similar to how AWS Lambda works: aws.amazon.com/lambda/?hp=tile&story=matson
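
Not OP, but to make the Lambda comparison concrete, here is a minimal pay-per-use metering sketch in Python. Every number in it (the per-request and per-GB-second prices and the RLC price) is a placeholder for illustration, not a real iExec or AWS rate.

```
# Toy sketch of pay-per-use metering, in the spirit of AWS Lambda billing.
# All prices here are made-up placeholders, not real iExec or AWS rates.

def lambda_style_cost(requests, avg_ms, mem_gb,
                      price_per_request=0.0000002,       # assumed USD per invocation
                      price_per_gb_second=0.0000166667):  # assumed USD per GB-second
    """Estimate the cost of a batch of function invocations."""
    gb_seconds = requests * (avg_ms / 1000.0) * mem_gb
    return requests * price_per_request + gb_seconds * price_per_gb_second

def rlc_needed(usd_cost, rlc_price_usd=1.0):  # RLC price is an assumption
    """Convert the USD estimate into the amount of RLC a dApp would buy."""
    return usd_cost / rlc_price_usd

usd = lambda_style_cost(requests=1_000_000, avg_ms=200, mem_gb=0.5)
print(f"~${usd:.2f} of compute, ~{rlc_needed(usd):.2f} RLC at the assumed price")
```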

being able to speak your mind while retaining some level of anonymity. i'm able to speak freely without angering the higher-ups should i slip up

like i said earlier, the claim of 1 million TPS was only achieved using BigchainDB, which is a separate product used in federated chains

As far as scaling for decentralized computing power goes... Is it possible for other blockchains to integrate merged mining like BTC has through RSK? What is the theoretical limit of processing power that can be hosted through the platform?

Riddle user here. RLC is one of the four blockchain projects I'm invested in. I predict big data and AI to be huge use cases for iExec. B for I believe

What price do you predict this coin will hit EOM and EOY?

The broader cloud computation/HPC claims have the exact same issue anyway

What are the token economics for this project?

>Typing as a 12 year old.
Gtfo

This is Jow Forums, everyone here types like that. lol. There is a user in everyone.

:3

you could, but it will never be as profitable due to ASICs. the iexec network is for generalized compute at this time

not sure what you're saying here, please explain

staking of coins combined with our deploy-on-any-blockchain strategy; we think this is effective in driving demand

Price predictions for EOY?

price predictions are foolish. honestly, we just keep developing, and finally it's starting to get noticed.

>please explain
It's a cloud computing service

Cloud/high performance computing needs fast and efficient hardware and communication infrastructure.

Blockchains are neither, so where is it and who's paying for it? Should be an easy enough question?

>iex.ec/whitepaper/iExec-WPv3.0-English.pdf
Thanks for the answer... I guess the answer to a generic question now amounts to a finger pointed at the whitepaper. Smfh

Attached: eatshit.png (600x800, 547K)

Well, that's fair. You think it's still early to invest?

you have two levers in building a blockchain
>1) speed
>2) security

imagine each can be dialed up to 10

ethereum, for example, would be a 10 on security and a 2 on speed

iexec is an 11 on speed and a 2 on security (we are remediating this with PoCo)

instead of having to execute one task repeatedly across all nodes, iexec can send different pieces of a task to many nodes, splitting up the task so it is worked on in parallel
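
To make the splitting idea concrete, here is a minimal Python sketch where local processes stand in for worker nodes. It is only an illustration of the split-and-recombine pattern, not the actual iExec middleware.

```
# Minimal sketch of the "split one task across many workers" idea.
# Local processes stand in for iExec worker nodes.
from concurrent.futures import ProcessPoolExecutor

def render_chunk(chunk):
    # placeholder for real work (rendering, number crunching, etc.)
    return sum(x * x for x in chunk)

def split(data, n):
    size = (len(data) + n - 1) // n
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = split(data, 8)                      # one chunk per "worker"
    with ProcessPoolExecutor(max_workers=8) as pool:
        partials = list(pool.map(render_chunk, chunks))
    print(sum(partials))                         # recombine the partial results
```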

When ETH switches to Proof of Stake, there is a high probability that ETH miners will move their power to the RLC platform b/c profit. Besides, there are coins with less function that are worth more. Fundamentally RLC is looking very good. One should always proceed with caution when speculating, but this is actually a solid coin.

are you the same retard that comes to every freaking RLC thread and can't read a simple whitepaper

this is driving me insane

it's used for high-performance computing or executing code that would typically bog down the ethereum network (or most any blockchain for that matter)

the only real competitor is golem, but we see them as very far behind us technologically

So I show up with a workload that's going to take, e.g., 100 million CPU-hours. And the dataset is 700TB. (Let's say we render the next Disney movie.)

You're telling me you're going to chunk all that data. Send it over the fucking internet. Calculate it on people's laptops n shit, and send it all back.

And this is competitive against simply renting timeslots on some supercomputer.

Did I get that right? Because that sounds like a load of wank to me

why would they do it all in one batch that large? wouldn't they do better breaking it down and having it rendered accordingly?

No this is my first time asking about this token.
Ty for answering. So I know Golem is going for graphics, I think; which niche in the industry are you guys going to point the computing power at?

Attached: Oreally.jpg (325x305, 14K)

Super informative thread for a brainlet like myself
About to unironically buy 100k.

Attached: KfvbBdIDR5lqQhLR0kJ1kWJDWcxtCf_oIt5gbLjzLiQ.png (560x632, 141K)

That changes absolutely nothing about the total workload, the network usage or resources required, or the efficiency thereof

Could you give us a list of use cases for this project?

SONM is a competitor. COLX wants to be there but talks too much about what they are going to do instead of just doing it imo.

trying to push one data stream of 700 TB is way different from breaking it down by a factor of 50x or so

This.

Can you explain how RLC is related to intel?

How is rendering 700TB of raw video different than rendering 140TB x5? Even if it were, the whole point of cloud/HPC is to work with distributable workloads, so you're going to chunk it anyway.

you will be charged for going over a certain data size, just like AWS Lambda does

yes it will be extremely competitive

right now we are targeting developers (think CryptoKitties and the like); in time anyone could use the iexec network. we imagine many applications making off-chain calls to compute on iexec. read up on AWS Lambda if you want to see an example of how this works
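
A rough sketch of that off-chain call pattern, assuming a hypothetical marketplace API (the Marketplace class, submit_task and get_result are made-up names, not the real iExec SDK): the dApp backend submits heavy work, keeps serving users, and polls for the result.

```
# Rough sketch of the off-chain call pattern described above.
# `Marketplace`, `submit_task` and `get_result` are hypothetical names.
import time

class Marketplace:
    def __init__(self):
        self._results = {}

    def submit_task(self, image, args, max_price_rlc):
        task_id = f"task-{len(self._results)}"
        # a real marketplace would match this order with a worker pool;
        # here we just "compute" immediately so the sketch runs
        self._results[task_id] = {"image": image, "output": sum(args)}
        return task_id

    def get_result(self, task_id):
        return self._results.get(task_id)

market = Marketplace()
tid = market.submit_task(image="my-heavy-job:latest", args=[1, 2, 3], max_price_rlc=5)
while (result := market.get_result(tid)) is None:
    time.sleep(1)   # the dApp keeps serving users while the job runs elsewhere
print(result["output"])
```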

Split load...always easier.

There's also SONM, who are going after the fog network to begin with; iExec will move towards this once it's established for dApps.

iExec's team is an all star team compared to SONM and golem

(From the whitepaper)

Explain Proof of Contribution and how it applies to, say, hodlers?

I have a low spec laptop, but my rlc stack of 9k is pretty sizeable. Will I be able to take advantage of the PoC?
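
Not OP, but roughly: PoCo has the same task computed by several workers, and the scheduler accepts the result that a stake/reputation-weighted majority agrees on. A toy illustration is below; the weights and the threshold are made-up numbers, and the real protocol in the whitepaper is more involved.

```
# Toy illustration of the replicate-and-vote idea behind PoCo
# (Proof of Contribution). Weights and the confidence threshold are
# made-up numbers; the real protocol is more involved.
from collections import defaultdict

def poco_consensus(contributions, threshold=0.66):
    """contributions: list of (worker, result_hash, weight)."""
    votes = defaultdict(float)
    total = 0.0
    for _worker, result_hash, weight in contributions:
        votes[result_hash] += weight
        total += weight
    best_hash, best_weight = max(votes.items(), key=lambda kv: kv[1])
    if best_weight / total >= threshold:
        return best_hash    # consensus reached, workers who agreed get rewarded
    return None             # not enough agreement, task gets re-scheduled

print(poco_consensus([("w1", "0xabc", 3.0), ("w2", "0xabc", 2.0), ("w3", "0xdef", 1.0)]))
```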

What part don't you understand? It's the difference between trying to take all of the groceries at one time by yourself and having several people do it at one time. Both result in having all of the groceries there at the end, but one is way harder. If they split it up they will get a better result. No reason to do 700 TB at one time from one source.

You are aware that Netflix does all of its rendering and video processing on AWS with EC2 instances, right?

The data is so big that it's bigger than the disk size on the EC2 instances, so they split the raw video into smaller chunks for processing and glue it back together on the other end before it goes into S3.

The exact same thing can be done on iExec's network, except cheaper and faster
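
The chunk -> process -> glue pattern looks roughly like this. The sketch uses a local file so it runs anywhere; in the real pipeline each chunk would be shipped to a separate worker instead of processed in a loop, and the 64 MiB chunk size is an arbitrary choice.

```
# Sketch of the chunk -> process -> glue pattern described above.

CHUNK_SIZE = 64 * 1024 * 1024  # 64 MiB per chunk (arbitrary choice)

def process(chunk: bytes) -> bytes:
    # placeholder for transcoding / rendering a slice of the video
    return chunk

def chunked_pipeline(src_path: str, dst_path: str) -> None:
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while chunk := src.read(CHUNK_SIZE):   # never hold the whole file in RAM
            dst.write(process(chunk))          # "glue" the results back in order

# chunked_pipeline("raw_footage.bin", "rendered.bin")
```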

>yes it will be extremely competitive
You still fail to explain how a system of distributed, general-purpose compute units can beat, in terms of efficiency, a centralized, specialized system that was built for exactly that.

I highly doubt your rag-tag group of computers will beat the price/performance ratio of whatever Dell or Siemens sell in their cluster lineups. Just saying.

You realize a computing cluster is literally just a lot of computers, don't you? Otherwise you should probably not be contributing to a discussion about the economics of efficient cloud architectures

not trying to be rude; we have seen SONM, and we have been around long enough to know that the way they are going about it is completely wrong. it's like building a house and starting with the attic first (hint: it won't work)

intel was interested in us because we are using their technology (SGX) to secure tasks on worker computers
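
For anyone wondering what SGX buys you here, in spirit: the scheduler only accepts a result if it carries a valid proof that it was produced by trusted code on trusted hardware. Real SGX attestation (enclaves, quotes, attestation services) is far more involved; the HMAC with a shared key below is only a toy stand-in for that idea.

```
# Extremely simplified stand-in for what SGX gives the network: the scheduler
# only accepts a result that carries a valid proof it came from trusted code.
# Real SGX attestation is far more involved; an HMAC is just the toy version.
import hmac, hashlib

ENCLAVE_KEY = b"provisioned-to-trusted-hardware-only"   # assumption for the demo

def worker_run(task_input: bytes):
    result = hashlib.sha256(task_input).digest()          # the "computation"
    proof = hmac.new(ENCLAVE_KEY, result, hashlib.sha256).digest()
    return result, proof

def scheduler_accepts(result: bytes, proof: bytes) -> bool:
    expected = hmac.new(ENCLAVE_KEY, result, hashlib.sha256).digest()
    return hmac.compare_digest(proof, expected)

res, prf = worker_run(b"some task payload")
print(scheduler_accepts(res, prf))        # True
print(scheduler_accepts(res, b"forged"))  # False
```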

I am not talking about the cluster of computers; we are talking about sending one super large batch of data at one time, which would be stupid and would bottleneck how fast you can get it done.

Pls notice me senpai

i have to go now, train to catch, i will try to follow up with the other questions later tonight

So you get more internet connections and send the workload in parallel? Pretty sure that's not what this coin does, or what internet providers will let you do

He used an analogy to explain it to you and you completely missed the point lmao you're too dumb to buy RLC please don't

That's the beauty of the iExec network: when you submit your task contract with your requirements, you may very well select a single cluster with the appropriate specs, but you can also distribute the workload across multiple clusters that meet them.

No one is saying spare resources on a mobile phone are going to compete with a centralised supercomputer; what we're saying is that if you split the task across the network, you don't need each device to have superior processing power.

I can pay 10k mobiles to use a small amount of resource to render a few frames each asynchronously, or ask an expensive super computer to render it synchronously.

They might finish at the same time but the decentralised version will be much cheaper
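
Back-of-the-envelope version of that claim. Every number below is an assumption picked for illustration, not a measured figure.

```
# Back-of-the-envelope version of the claim above. All numbers are assumptions.
frames            = 100_000      # frames to render
mobile_devices    = 10_000
frames_per_device = frames / mobile_devices          # 10 frames each
secs_per_frame_mobile  = 600                         # assumed: 10 min/frame on a phone
price_per_frame_mobile = 0.002                       # assumed USD paid per frame

secs_per_frame_super = 6                             # assumed: cluster node is 100x faster
price_per_node_hour  = 2.0                           # assumed USD per node-hour
super_nodes          = 100

mobile_wallclock = frames_per_device * secs_per_frame_mobile / 3600        # hours
super_wallclock  = frames * secs_per_frame_super / super_nodes / 3600      # hours

mobile_cost = frames * price_per_frame_mobile
super_cost  = super_wallclock * super_nodes * price_per_node_hour

print(f"mobiles: {mobile_wallclock:.2f} h wall-clock, ${mobile_cost:.0f}")
print(f"cluster: {super_wallclock:.2f} h wall-clock, ${super_cost:.0f}")
```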

>ask an expensive super computer to render it synchronously.
This is not how high performance computing works, dude.

There is no real-world application that requires non-parallelizable computation of that magnitude.
If you think this somehow isn't true, I urge you to note that a supercomputer is itself made of many CPUs, not one hyper-powerful one (which doesn't exist anyway).

Interesting opinion on SONM, what do you think about Elastic Project?

Attached: 1516255719637.jpg (700x462, 82K)

Bamp

I wish for iExec to partner with Microsoft to integrate iExec into Microsoft Windows 10 so every average Windows user can sell their idle computing power easily in a single mouse click. Also partner with Apple to integrate iExec into MacOS. Partner with Linux to integrate iExec into Linux.

Please make this happen.

That would be crazy.
I could see them doing it.

Well intel is on board. So.... ya.

Bamp for op

this is a good thread explaining iexec
bump for anyone looking into it

when can i farm shrimp on it?

Whenever someone makes a shrimp farming dApp
probably within the year

Attached: comfyexecs.png (758x775, 521K)

Attached: whyopwhy.gif (600x488, 180K)

Other 3?

Are you gonna use Request Network? What do you think of their team?

This is a super comfy thread, thank you OP, iexecutives will do their part.

keepin it alive

scam coin

Bumperoni

when moon

never

thanks just sold 100k

Awesome thread.

bump these questions. come back OP

REQ and RLC will be using each other’s services

It's not financially sound. You see, the buyer will want to take the best offer.

In practice this means if you pay 0.1 USD/kWh but your "competitor" pays only 0.05 USD/kWh, then he will outcompete you and you will not sell your processing power to anyone.

It's more complicated than that; there are pools involved too, so you will most likely always find work. Maybe you won't make a sustainable margin (or any margin at all), but you will always find work.
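
Putting numbers on that exchange: the two electricity prices come from the post above (0.10 vs 0.05 USD/kWh); the rig's power draw and the rate the pool pays per compute-hour are assumptions for illustration.

```
# Electricity prices come from the thread; power draw and the pool's
# per-compute-hour rate are assumptions for illustration.
def margin_per_hour(market_rate_usd, power_kw, electricity_usd_per_kwh):
    """What a worker nets per hour after paying for electricity."""
    return market_rate_usd - power_kw * electricity_usd_per_kwh

rate  = 0.08          # assumed USD the pool pays per compute-hour
power = 0.3           # assumed kW drawn by the rig under load

print(margin_per_hour(rate, power, 0.10))   # 0.05  -> thin but positive margin
print(margin_per_hour(rate, power, 0.05))   # 0.065 -> cheaper power, bigger margin
```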